Test Report: Docker_Linux_crio 22158

                    
84cd1e71ac9e612e02e936645952571e7d114b51:2025-12-16:42799

Failed tests (30/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.25
44 TestAddons/parallel/Registry 13.55
45 TestAddons/parallel/RegistryCreds 0.4
46 TestAddons/parallel/Ingress 146.07
47 TestAddons/parallel/InspektorGadget 6.25
48 TestAddons/parallel/MetricsServer 5.32
50 TestAddons/parallel/CSI 42.6
51 TestAddons/parallel/Headlamp 2.45
52 TestAddons/parallel/CloudSpanner 6.25
53 TestAddons/parallel/LocalPath 8.1
54 TestAddons/parallel/NvidiaDevicePlugin 5.26
55 TestAddons/parallel/Yakd 5.25
56 TestAddons/parallel/AmdGpuDevicePlugin 6.24
149 TestFunctional/parallel/ImageCommands/ImageListShort 2.33
150 TestFunctional/parallel/ImageCommands/ImageListTable 2.25
151 TestFunctional/parallel/ImageCommands/ImageListJson 2.24
152 TestFunctional/parallel/ImageCommands/ImageListYaml 2.37
294 TestJSONOutput/pause/Command 2.44
300 TestJSONOutput/unpause/Command 2.24
366 TestPause/serial/Pause 6.19
403 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.65
405 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.72
416 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.06
422 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.2
424 TestStartStop/group/old-k8s-version/serial/Pause 6.19
430 TestStartStop/group/no-preload/serial/Pause 6.56
439 TestStartStop/group/newest-cni/serial/Pause 5.81
446 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.22
448 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.31
475 TestStartStop/group/embed-certs/serial/Pause 6.81
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable volcano --alsologtostderr -v=1: exit status 11 (245.900586ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 02:27:25.221378   18176 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:25.221665   18176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:25.221675   18176 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:25.221679   18176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:25.221848   18176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:25.222098   18176 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:25.222389   18176 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:25.222407   18176 addons.go:622] checking whether the cluster is paused
	I1216 02:27:25.222484   18176 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:25.222495   18176 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:25.222843   18176 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:25.240794   18176 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:25.240873   18176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:25.257622   18176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:25.353193   18176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:25.353274   18176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:25.381398   18176 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:25.381425   18176 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:25.381430   18176 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:25.381433   18176 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:25.381436   18176 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:25.381440   18176 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:25.381444   18176 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:25.381448   18176 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:25.381452   18176 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:25.381467   18176 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:25.381475   18176 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:25.381480   18176 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:25.381489   18176 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:25.381492   18176 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:25.381495   18176 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:25.381502   18176 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:25.381507   18176 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:25.381511   18176 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:25.381514   18176 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:25.381516   18176 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:25.381519   18176 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:25.381522   18176 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:25.381525   18176 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:25.381527   18176 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:25.381530   18176 cri.go:89] found id: ""
	I1216 02:27:25.381585   18176 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:25.395310   18176 out.go:203] 
	W1216 02:27:25.396639   18176 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:25.396657   18176 out.go:285] * 
	* 
	W1216 02:27:25.399771   18176 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:25.401685   18176 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
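Note on the root cause: this failure, together with the Registry and RegistryCreds failures below (and, judging from the summary table, most of the other addon-disable and pause failures in this run), exits with the same MK_ADDON_DISABLE_PAUSED error. Before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json`; on this CRI-O node the runc step fails with "open /run/runc: no such file or directory". A minimal manual re-check, reusing the two commands that appear verbatim in the log above and assuming the addons-568105 profile from this run is still up:

    out/minikube-linux-amd64 -p addons-568105 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # at test time this succeeded and printed the container IDs listed above
    out/minikube-linux-amd64 -p addons-568105 ssh "sudo runc list -f json"
    # at test time this failed with: open /run/runc: no such file or directory

The second command is the one the paused-state check trips over, which is what turns the `addons disable` invocations in this report into exit status 11 failures.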

TestAddons/parallel/Registry (13.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.085677ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002062158s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002542748s
addons_test.go:394: (dbg) Run:  kubectl --context addons-568105 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-568105 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-568105 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.107067366s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 ip
2025/12/16 02:27:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable registry --alsologtostderr -v=1: exit status 11 (237.049698ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 02:27:48.559887   21075 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:48.560063   21075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:48.560074   21075 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:48.560080   21075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:48.560311   21075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:48.560578   21075 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:48.560919   21075 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:48.560943   21075 addons.go:622] checking whether the cluster is paused
	I1216 02:27:48.561040   21075 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:48.561055   21075 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:48.561453   21075 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:48.579179   21075 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:48.579243   21075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:48.597007   21075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:48.694222   21075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:48.694304   21075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:48.721691   21075 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:48.721715   21075 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:48.721719   21075 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:48.721722   21075 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:48.721725   21075 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:48.721732   21075 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:48.721734   21075 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:48.721737   21075 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:48.721740   21075 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:48.721750   21075 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:48.721753   21075 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:48.721756   21075 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:48.721758   21075 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:48.721761   21075 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:48.721764   21075 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:48.721771   21075 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:48.721777   21075 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:48.721781   21075 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:48.721783   21075 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:48.721786   21075 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:48.721791   21075 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:48.721793   21075 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:48.721796   21075 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:48.721798   21075 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:48.721801   21075 cri.go:89] found id: ""
	I1216 02:27:48.721862   21075 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:48.735740   21075 out.go:203] 
	W1216 02:27:48.736869   21075 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:48.736891   21075 out.go:285] * 
	* 
	W1216 02:27:48.739784   21075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:48.741045   21075 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.55s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.712338ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-568105
addons_test.go:334: (dbg) Run:  kubectl --context addons-568105 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (247.03592ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 02:27:48.952965   21169 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:48.953084   21169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:48.953092   21169 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:48.953096   21169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:48.953308   21169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:48.953566   21169 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:48.953870   21169 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:48.953907   21169 addons.go:622] checking whether the cluster is paused
	I1216 02:27:48.953998   21169 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:48.954010   21169 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:48.954356   21169 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:48.972857   21169 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:48.972906   21169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:48.991397   21169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:49.090812   21169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:49.090899   21169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:49.122242   21169 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:49.122261   21169 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:49.122264   21169 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:49.122267   21169 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:49.122270   21169 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:49.122274   21169 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:49.122277   21169 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:49.122286   21169 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:49.122289   21169 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:49.122294   21169 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:49.122297   21169 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:49.122300   21169 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:49.122302   21169 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:49.122305   21169 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:49.122308   21169 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:49.122315   21169 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:49.122324   21169 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:49.122327   21169 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:49.122330   21169 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:49.122333   21169 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:49.122337   21169 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:49.122339   21169 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:49.122342   21169 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:49.122344   21169 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:49.122347   21169 cri.go:89] found id: ""
	I1216 02:27:49.122382   21169 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:49.137377   21169 out.go:203] 
	W1216 02:27:49.138607   21169 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:49.138628   21169 out.go:285] * 
	* 
	W1216 02:27:49.141941   21169 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:49.143113   21169 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (146.07s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-568105 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-568105 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-568105 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [00250db6-eef3-4c7c-b9ab-f99a4cb5d10b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [00250db6-eef3-4c7c-b9ab-f99a4cb5d10b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003817094s
I1216 02:27:48.986946    8586 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.506601744s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-568105 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
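The curl step above is where this test fails: exit status 28 (most likely propagated from curl through `minikube ssh`) is curl's "operation timed out" code, so the request from inside the node to http://127.0.0.1/ with the nginx.example.com Host header never got a response. A quick manual re-check is the same ssh/curl invocation the test uses; the -m 10 timeout below is an addition of this note, not part of the test, so a hang shows up quickly:

    out/minikube-linux-amd64 -p addons-568105 ssh "curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # expected: a response from the nginx test pod routed through ingress-nginx; in this run the request timed out instead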
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-568105
helpers_test.go:244: (dbg) docker inspect addons-568105:

-- stdout --
	[
	    {
	        "Id": "128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50",
	        "Created": "2025-12-16T02:25:45.591874719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11001,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:25:45.637033839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/hostname",
	        "HostsPath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/hosts",
	        "LogPath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50-json.log",
	        "Name": "/addons-568105",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-568105:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-568105",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50",
	                "LowerDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089/merged",
	                "UpperDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089/diff",
	                "WorkDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-568105",
	                "Source": "/var/lib/docker/volumes/addons-568105/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-568105",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-568105",
	                "name.minikube.sigs.k8s.io": "addons-568105",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "90190998bf440cbf3358c33fe0e1c32414ed2292d54af7e3d435caafa41d08a6",
	            "SandboxKey": "/var/run/docker/netns/90190998bf44",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-568105": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d4d6946230c80378c13379f681aa4fc160da58c2330823871ef9d83121c1c0ec",
	                    "EndpointID": "1467eb93e9ce1c114b4079e0f13eaf54e3d8e07071eb8337021f2d2271312101",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "32:ad:f9:4e:a6:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-568105",
	                        "128ae821ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-568105 -n addons-568105
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-568105 logs -n 25: (1.119297442s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-346468 --alsologtostderr --binary-mirror http://127.0.0.1:41995 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-346468 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ -p binary-mirror-346468                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-346468 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ addons  │ enable dashboard -p addons-568105                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-568105                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ start   │ -p addons-568105 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:27 UTC │
	│ addons  │ addons-568105 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-568105 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ ssh     │ addons-568105 ssh cat /opt/local-path-provisioner/pvc-aac4cbfb-90a8-4cdd-bbae-a3dd306f3bb5_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │ 16 Dec 25 02:27 UTC │
	│ addons  │ addons-568105 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ ip      │ addons-568105 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │ 16 Dec 25 02:27 UTC │
	│ addons  │ addons-568105 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-568105                                                                                                                                                                                                                                                                                                                                                                                           │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │ 16 Dec 25 02:27 UTC │
	│ addons  │ addons-568105 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ ssh     │ addons-568105 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │                     │
	│ addons  │ addons-568105 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │                     │
	│ ip      │ addons-568105 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-568105        │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │ 16 Dec 25 02:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:21.930987   10347 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:21.931081   10347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:21.931088   10347 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:21.931094   10347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:21.931271   10347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:25:21.931746   10347 out.go:368] Setting JSON to false
	I1216 02:25:21.932615   10347 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":474,"bootTime":1765851448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:21.932668   10347 start.go:143] virtualization: kvm guest
	I1216 02:25:21.934743   10347 out.go:179] * [addons-568105] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:25:21.935885   10347 notify.go:221] Checking for updates...
	I1216 02:25:21.935934   10347 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:25:21.937167   10347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:21.938606   10347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:25:21.940006   10347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:25:21.941230   10347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:25:21.942303   10347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:25:21.943564   10347 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:21.966291   10347 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:25:21.966394   10347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:22.017592   10347 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 02:25:22.007805985 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:22.017686   10347 docker.go:319] overlay module found
	I1216 02:25:22.020011   10347 out.go:179] * Using the docker driver based on user configuration
	I1216 02:25:22.021067   10347 start.go:309] selected driver: docker
	I1216 02:25:22.021083   10347 start.go:927] validating driver "docker" against <nil>
	I1216 02:25:22.021094   10347 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:25:22.021575   10347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:22.074000   10347 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 02:25:22.065106704 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:22.074178   10347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:22.074414   10347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:25:22.075929   10347 out.go:179] * Using Docker driver with root privileges
	I1216 02:25:22.077057   10347 cni.go:84] Creating CNI manager for ""
	I1216 02:25:22.077119   10347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:25:22.077130   10347 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 02:25:22.077188   10347 start.go:353] cluster config:
	{Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1216 02:25:22.078380   10347 out.go:179] * Starting "addons-568105" primary control-plane node in "addons-568105" cluster
	I1216 02:25:22.079338   10347 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 02:25:22.080352   10347 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:25:22.081403   10347 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:22.081441   10347 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 02:25:22.081449   10347 cache.go:65] Caching tarball of preloaded images
	I1216 02:25:22.081503   10347 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:25:22.081533   10347 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 02:25:22.081541   10347 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 02:25:22.081898   10347 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/config.json ...
	I1216 02:25:22.081930   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/config.json: {Name:mk21419428632a34a499f735ccdc8529f44bed77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:22.097619   10347 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 02:25:22.097734   10347 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory
	I1216 02:25:22.097752   10347 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory, skipping pull
	I1216 02:25:22.097756   10347 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in cache, skipping pull
	I1216 02:25:22.097763   10347 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb as a tarball
	I1216 02:25:22.097770   10347 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb from local cache
	I1216 02:25:35.517407   10347 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb from cached tarball
	I1216 02:25:35.517448   10347 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:25:35.517488   10347 start.go:360] acquireMachinesLock for addons-568105: {Name:mkff1bc43d5ab769de8a955435d1e20ee0b29deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:25:35.517599   10347 start.go:364] duration metric: took 91.641µs to acquireMachinesLock for "addons-568105"
	I1216 02:25:35.517624   10347 start.go:93] Provisioning new machine with config: &{Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:25:35.517712   10347 start.go:125] createHost starting for "" (driver="docker")
	I1216 02:25:35.519526   10347 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 02:25:35.519843   10347 start.go:159] libmachine.API.Create for "addons-568105" (driver="docker")
	I1216 02:25:35.519883   10347 client.go:173] LocalClient.Create starting
	I1216 02:25:35.520024   10347 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 02:25:35.632190   10347 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 02:25:35.862336   10347 cli_runner.go:164] Run: docker network inspect addons-568105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 02:25:35.879945   10347 cli_runner.go:211] docker network inspect addons-568105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 02:25:35.880012   10347 network_create.go:284] running [docker network inspect addons-568105] to gather additional debugging logs...
	I1216 02:25:35.880030   10347 cli_runner.go:164] Run: docker network inspect addons-568105
	W1216 02:25:35.895844   10347 cli_runner.go:211] docker network inspect addons-568105 returned with exit code 1
	I1216 02:25:35.895877   10347 network_create.go:287] error running [docker network inspect addons-568105]: docker network inspect addons-568105: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-568105 not found
	I1216 02:25:35.895899   10347 network_create.go:289] output of [docker network inspect addons-568105]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-568105 not found
	
	** /stderr **
	I1216 02:25:35.896033   10347 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:25:35.912775   10347 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000aff220}
	I1216 02:25:35.912809   10347 network_create.go:124] attempt to create docker network addons-568105 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 02:25:35.912877   10347 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-568105 addons-568105
	I1216 02:25:35.958030   10347 network_create.go:108] docker network addons-568105 192.168.49.0/24 created
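	For reference, the bridge network that network_create.go reports here can be recreated by hand; every flag below is copied from the cli_runner invocation a few lines up, so only the profile name is an assumption if this is adapted to another cluster.

	    # Sketch: manual equivalent of the logged network_create step
	    docker network create --driver=bridge \
	      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=addons-568105 \
	      addons-568105
	    # confirm the subnet that kic.go then uses to calculate the static IP
	    docker network inspect addons-568105 --format '{{(index .IPAM.Config 0).Subnet}}'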
	I1216 02:25:35.958060   10347 kic.go:121] calculated static IP "192.168.49.2" for the "addons-568105" container
	I1216 02:25:35.958128   10347 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 02:25:35.973295   10347 cli_runner.go:164] Run: docker volume create addons-568105 --label name.minikube.sigs.k8s.io=addons-568105 --label created_by.minikube.sigs.k8s.io=true
	I1216 02:25:35.991522   10347 oci.go:103] Successfully created a docker volume addons-568105
	I1216 02:25:35.991602   10347 cli_runner.go:164] Run: docker run --rm --name addons-568105-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568105 --entrypoint /usr/bin/test -v addons-568105:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 02:25:41.738962   10347 cli_runner.go:217] Completed: docker run --rm --name addons-568105-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568105 --entrypoint /usr/bin/test -v addons-568105:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib: (5.747315106s)
	I1216 02:25:41.738991   10347 oci.go:107] Successfully prepared a docker volume addons-568105
	I1216 02:25:41.739034   10347 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:41.739047   10347 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 02:25:41.739121   10347 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-568105:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 02:25:45.525444   10347 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-568105:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.786256623s)
	I1216 02:25:45.525480   10347 kic.go:203] duration metric: took 3.786428316s to extract preloaded images to volume ...
	W1216 02:25:45.525574   10347 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 02:25:45.525606   10347 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 02:25:45.525642   10347 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 02:25:45.576400   10347 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-568105 --name addons-568105 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568105 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-568105 --network addons-568105 --ip 192.168.49.2 --volume addons-568105:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 02:25:45.875699   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Running}}
	I1216 02:25:45.895729   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:25:45.913183   10347 cli_runner.go:164] Run: docker exec addons-568105 stat /var/lib/dpkg/alternatives/iptables
	I1216 02:25:45.958954   10347 oci.go:144] the created container "addons-568105" has a running status.
	I1216 02:25:45.958986   10347 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa...
	I1216 02:25:46.062281   10347 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 02:25:46.085898   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:25:46.104978   10347 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 02:25:46.105003   10347 kic_runner.go:114] Args: [docker exec --privileged addons-568105 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 02:25:46.153559   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:25:46.177713   10347 machine.go:94] provisionDockerMachine start ...
	I1216 02:25:46.177812   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:46.202110   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:46.202453   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:46.202474   10347 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:25:46.203799   10347 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44284->127.0.0.1:32768: read: connection reset by peer
	I1216 02:25:49.339555   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-568105
	
	I1216 02:25:49.339582   10347 ubuntu.go:182] provisioning hostname "addons-568105"
	I1216 02:25:49.339636   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.358553   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:49.358762   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:49.358775   10347 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-568105 && echo "addons-568105" | sudo tee /etc/hostname
	I1216 02:25:49.500259   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-568105
	
	I1216 02:25:49.500340   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.518189   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:49.518395   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:49.518412   10347 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-568105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-568105/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-568105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:25:49.652285   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
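	If the hostname provisioning above needs to be re-checked later, the same state can be read back over SSH. This is a sketch only; it assumes the minikube binary and profile name used throughout this report.

	    out/minikube-linux-amd64 -p addons-568105 ssh -- "hostname; grep addons-568105 /etc/hosts"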
	I1216 02:25:49.652317   10347 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 02:25:49.652347   10347 ubuntu.go:190] setting up certificates
	I1216 02:25:49.652365   10347 provision.go:84] configureAuth start
	I1216 02:25:49.652411   10347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568105
	I1216 02:25:49.669378   10347 provision.go:143] copyHostCerts
	I1216 02:25:49.669441   10347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 02:25:49.669541   10347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 02:25:49.669603   10347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 02:25:49.669651   10347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.addons-568105 san=[127.0.0.1 192.168.49.2 addons-568105 localhost minikube]
	I1216 02:25:49.729533   10347 provision.go:177] copyRemoteCerts
	I1216 02:25:49.729585   10347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:25:49.729632   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.746468   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:49.842779   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 02:25:49.861132   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 02:25:49.877314   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 02:25:49.893243   10347 provision.go:87] duration metric: took 240.860769ms to configureAuth
	I1216 02:25:49.893267   10347 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:25:49.893411   10347 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:25:49.893502   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.910832   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:49.911052   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:49.911072   10347 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 02:25:50.172231   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 02:25:50.172257   10347 machine.go:97] duration metric: took 3.994517599s to provisionDockerMachine
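	The CRIO_MINIKUBE_OPTIONS block written just above lands in /etc/sysconfig/crio.minikube inside the node. A hedged way to confirm that the insecure-registry flag survived the crio restart, assuming the same binary and profile:

	    out/minikube-linux-amd64 -p addons-568105 ssh -- "cat /etc/sysconfig/crio.minikube"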
	I1216 02:25:50.172269   10347 client.go:176] duration metric: took 14.652376853s to LocalClient.Create
	I1216 02:25:50.172289   10347 start.go:167] duration metric: took 14.652449708s to libmachine.API.Create "addons-568105"
	I1216 02:25:50.172299   10347 start.go:293] postStartSetup for "addons-568105" (driver="docker")
	I1216 02:25:50.172311   10347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:25:50.172370   10347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:25:50.172415   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.189371   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.287514   10347 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:25:50.290710   10347 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:25:50.290751   10347 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:25:50.290765   10347 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 02:25:50.290836   10347 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 02:25:50.290869   10347 start.go:296] duration metric: took 118.56362ms for postStartSetup
	I1216 02:25:50.291126   10347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568105
	I1216 02:25:50.309296   10347 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/config.json ...
	I1216 02:25:50.309545   10347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:25:50.309584   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.326207   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.418809   10347 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:25:50.423232   10347 start.go:128] duration metric: took 14.905503973s to createHost
	I1216 02:25:50.423256   10347 start.go:83] releasing machines lock for "addons-568105", held for 14.905644563s
	I1216 02:25:50.423331   10347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568105
	I1216 02:25:50.440177   10347 ssh_runner.go:195] Run: cat /version.json
	I1216 02:25:50.440231   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.440286   10347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:25:50.440350   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.457071   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.458632   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.604878   10347 ssh_runner.go:195] Run: systemctl --version
	I1216 02:25:50.611079   10347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 02:25:50.644095   10347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:25:50.648433   10347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:25:50.648489   10347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:25:50.672260   10347 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 02:25:50.672278   10347 start.go:496] detecting cgroup driver to use...
	I1216 02:25:50.672304   10347 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 02:25:50.672343   10347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 02:25:50.687302   10347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 02:25:50.698743   10347 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:25:50.698796   10347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:25:50.714050   10347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:25:50.730017   10347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:25:50.803613   10347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:25:50.889875   10347 docker.go:234] disabling docker service ...
	I1216 02:25:50.889938   10347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:25:50.907223   10347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:25:50.918886   10347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:25:50.998352   10347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:25:51.076588   10347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:25:51.088293   10347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:25:51.101953   10347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 02:25:51.102008   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.111472   10347 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 02:25:51.111519   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.119625   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.127363   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.135386   10347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:25:51.142890   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.150629   10347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.163097   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.171090   10347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:25:51.177890   10347 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 02:25:51.177939   10347 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 02:25:51.189329   10347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:25:51.196539   10347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:25:51.273292   10347 ssh_runner.go:195] Run: sudo systemctl restart crio
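	The sed commands above all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. A short check, again assuming the same profile, that the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl were applied before kubeadm runs:

	    out/minikube-linux-amd64 -p addons-568105 ssh -- \
	      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"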
	I1216 02:25:51.407013   10347 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 02:25:51.407092   10347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 02:25:51.410879   10347 start.go:564] Will wait 60s for crictl version
	I1216 02:25:51.410946   10347 ssh_runner.go:195] Run: which crictl
	I1216 02:25:51.414388   10347 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:25:51.438696   10347 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 02:25:51.438811   10347 ssh_runner.go:195] Run: crio --version
	I1216 02:25:51.465175   10347 ssh_runner.go:195] Run: crio --version
	I1216 02:25:51.493237   10347 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 02:25:51.494534   10347 cli_runner.go:164] Run: docker network inspect addons-568105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:25:51.512979   10347 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:25:51.516951   10347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:25:51.526490   10347 kubeadm.go:884] updating cluster {Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:25:51.526613   10347 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:51.526657   10347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:25:51.554313   10347 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:25:51.554335   10347 crio.go:433] Images already preloaded, skipping extraction
	I1216 02:25:51.554389   10347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:25:51.577952   10347 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:25:51.577978   10347 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:25:51.577986   10347 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 02:25:51.578074   10347 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-568105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
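	The kubelet unit fragment above is materialised as a systemd drop-in further down (10-kubeadm.conf, see the scp lines below). To read the file the node actually ends up with, something like the following works; the path is taken from the log, the invocation itself is an assumption.

	    out/minikube-linux-amd64 -p addons-568105 ssh -- "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"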
	I1216 02:25:51.578133   10347 ssh_runner.go:195] Run: crio config
	I1216 02:25:51.620692   10347 cni.go:84] Creating CNI manager for ""
	I1216 02:25:51.620722   10347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:25:51.620744   10347 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:25:51.620766   10347 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-568105 NodeName:addons-568105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:25:51.620917   10347 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-568105"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
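	The kubeadm, kubelet and kube-proxy configuration rendered above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). A sketch for pulling that file back off the node, for example to diff it against this log:

	    out/minikube-linux-amd64 -p addons-568105 ssh -- "sudo cat /var/tmp/minikube/kubeadm.yaml.new" > kubeadm-rendered.yaml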
	
	I1216 02:25:51.620984   10347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 02:25:51.629112   10347 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:25:51.629178   10347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:25:51.636711   10347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 02:25:51.648482   10347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 02:25:51.663072   10347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 02:25:51.674801   10347 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:25:51.678229   10347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:25:51.687431   10347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:25:51.765677   10347 ssh_runner.go:195] Run: sudo systemctl start kubelet
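	Once kubelet has been started as above, its state can be inspected directly on the node; both commands are sketches that assume the same profile.

	    out/minikube-linux-amd64 -p addons-568105 ssh -- "sudo systemctl is-active kubelet"
	    out/minikube-linux-amd64 -p addons-568105 ssh -- "sudo journalctl -u kubelet --no-pager -n 20"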
	I1216 02:25:51.789150   10347 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105 for IP: 192.168.49.2
	I1216 02:25:51.789173   10347 certs.go:195] generating shared ca certs ...
	I1216 02:25:51.789191   10347 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.789344   10347 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 02:25:51.903348   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt ...
	I1216 02:25:51.903377   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt: {Name:mka3bd05f062522bac970d87e69a6f4541c67945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.903577   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key ...
	I1216 02:25:51.903592   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key: {Name:mk6c16b6cf95261037ec88d060ec3f6c89fbea36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.903699   10347 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 02:25:51.962269   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt ...
	I1216 02:25:51.962295   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt: {Name:mk881062a9d4092bfcf46f29ecf2d3c3cbf1d6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.962459   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key ...
	I1216 02:25:51.962469   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key: {Name:mk85c89aeac918c8ed9e2f62e347511843d6bb33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.962542   10347 certs.go:257] generating profile certs ...
	I1216 02:25:51.962599   10347 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.key
	I1216 02:25:51.962613   10347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt with IP's: []
	I1216 02:25:51.990650   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt ...
	I1216 02:25:51.990675   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: {Name:mk89d973e054d2af0d0d12fa72da63d7b7cc951c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.990854   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.key ...
	I1216 02:25:51.990865   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.key: {Name:mk23ce2e5798b14f25ddc24f8ad21860e4d2d95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.990938   10347 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c
	I1216 02:25:51.990958   10347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 02:25:52.204013   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c ...
	I1216 02:25:52.204041   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c: {Name:mk6eac4c01d5db7800a0de5ec0cd6c917cf0a3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.204195   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c ...
	I1216 02:25:52.204208   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c: {Name:mkcf6edc0553dad82dfe4abad1fca12f2e8af338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.204294   10347 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt
	I1216 02:25:52.204389   10347 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key
	I1216 02:25:52.204446   10347 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key
	I1216 02:25:52.204465   10347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt with IP's: []
	I1216 02:25:52.285957   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt ...
	I1216 02:25:52.285984   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt: {Name:mk39bcf943bc32d6118697cd1443c5bf53423ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.286142   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key ...
	I1216 02:25:52.286152   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key: {Name:mk333ddc04925678bf1d04fd5cf85be03a1194f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.286333   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 02:25:52.286368   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 02:25:52.286393   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:25:52.286430   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 02:25:52.287001   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:25:52.304761   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 02:25:52.320927   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:25:52.336881   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:25:52.353099   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 02:25:52.369121   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:25:52.385237   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:25:52.401076   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:25:52.416676   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:25:52.434375   10347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:25:52.445831   10347 ssh_runner.go:195] Run: openssl version
	I1216 02:25:52.451688   10347 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.458523   10347 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:25:52.467351   10347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.470570   10347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.470610   10347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.504227   10347 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:25:52.512063   10347 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 02:25:52.518936   10347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:25:52.522277   10347 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 02:25:52.522322   10347 kubeadm.go:401] StartCluster: {Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:25:52.522401   10347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:25:52.522448   10347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:25:52.547576   10347 cri.go:89] found id: ""
	I1216 02:25:52.547629   10347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:25:52.555167   10347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:25:52.562515   10347 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 02:25:52.562558   10347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:25:52.569520   10347 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 02:25:52.569537   10347 kubeadm.go:158] found existing configuration files:
	
	I1216 02:25:52.569577   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 02:25:52.576487   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 02:25:52.576531   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 02:25:52.583397   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 02:25:52.590986   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 02:25:52.591038   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:25:52.598043   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 02:25:52.606198   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 02:25:52.606248   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:25:52.613321   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 02:25:52.620578   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 02:25:52.620622   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:25:52.627533   10347 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 02:25:52.660964   10347 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 02:25:52.661043   10347 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 02:25:52.679212   10347 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 02:25:52.679321   10347 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 02:25:52.679365   10347 kubeadm.go:319] OS: Linux
	I1216 02:25:52.679407   10347 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 02:25:52.679449   10347 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 02:25:52.679495   10347 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 02:25:52.679542   10347 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 02:25:52.679583   10347 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 02:25:52.679651   10347 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 02:25:52.679722   10347 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 02:25:52.679789   10347 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 02:25:52.732786   10347 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 02:25:52.732938   10347 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 02:25:52.733072   10347 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 02:25:52.740356   10347 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 02:25:52.743079   10347 out.go:252]   - Generating certificates and keys ...
	I1216 02:25:52.743188   10347 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 02:25:52.743265   10347 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 02:25:52.864090   10347 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 02:25:53.096154   10347 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 02:25:53.160673   10347 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 02:25:53.786925   10347 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 02:25:54.254541   10347 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 02:25:54.254684   10347 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-568105 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 02:25:54.740973   10347 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 02:25:54.741098   10347 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-568105 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 02:25:55.030131   10347 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 02:25:55.293192   10347 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 02:25:55.438431   10347 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 02:25:55.438493   10347 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 02:25:55.511628   10347 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 02:25:55.783281   10347 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 02:25:55.886088   10347 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 02:25:56.053726   10347 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 02:25:56.102775   10347 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 02:25:56.103292   10347 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 02:25:56.107881   10347 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 02:25:56.109374   10347 out.go:252]   - Booting up control plane ...
	I1216 02:25:56.109514   10347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 02:25:56.109625   10347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 02:25:56.110172   10347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 02:25:56.122948   10347 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 02:25:56.123087   10347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 02:25:56.129134   10347 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 02:25:56.129413   10347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 02:25:56.129479   10347 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 02:25:56.223556   10347 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 02:25:56.223710   10347 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 02:25:56.725256   10347 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.784823ms
	I1216 02:25:56.729126   10347 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 02:25:56.729274   10347 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 02:25:56.729367   10347 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 02:25:56.729472   10347 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 02:25:58.219737   10347 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.490428677s
	I1216 02:25:58.733693   10347 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.00446933s
	I1216 02:26:00.230982   10347 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501745765s
	I1216 02:26:00.246959   10347 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 02:26:00.256668   10347 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 02:26:00.265809   10347 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 02:26:00.266118   10347 kubeadm.go:319] [mark-control-plane] Marking the node addons-568105 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 02:26:00.274990   10347 kubeadm.go:319] [bootstrap-token] Using token: pcp3la.vbq2i6sf71q8sp7z
	I1216 02:26:00.276388   10347 out.go:252]   - Configuring RBAC rules ...
	I1216 02:26:00.276526   10347 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 02:26:00.279346   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 02:26:00.284235   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 02:26:00.286296   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 02:26:00.289344   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 02:26:00.291382   10347 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 02:26:00.637592   10347 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 02:26:01.051691   10347 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 02:26:01.636744   10347 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 02:26:01.637743   10347 kubeadm.go:319] 
	I1216 02:26:01.637908   10347 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 02:26:01.637928   10347 kubeadm.go:319] 
	I1216 02:26:01.638052   10347 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 02:26:01.638069   10347 kubeadm.go:319] 
	I1216 02:26:01.638104   10347 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 02:26:01.638223   10347 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 02:26:01.638314   10347 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 02:26:01.638324   10347 kubeadm.go:319] 
	I1216 02:26:01.638370   10347 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 02:26:01.638376   10347 kubeadm.go:319] 
	I1216 02:26:01.638415   10347 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 02:26:01.638421   10347 kubeadm.go:319] 
	I1216 02:26:01.638463   10347 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 02:26:01.638528   10347 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 02:26:01.638588   10347 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 02:26:01.638598   10347 kubeadm.go:319] 
	I1216 02:26:01.638692   10347 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 02:26:01.638804   10347 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 02:26:01.638836   10347 kubeadm.go:319] 
	I1216 02:26:01.638926   10347 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token pcp3la.vbq2i6sf71q8sp7z \
	I1216 02:26:01.639067   10347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 02:26:01.639099   10347 kubeadm.go:319] 	--control-plane 
	I1216 02:26:01.639113   10347 kubeadm.go:319] 
	I1216 02:26:01.639246   10347 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 02:26:01.639260   10347 kubeadm.go:319] 
	I1216 02:26:01.639390   10347 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token pcp3la.vbq2i6sf71q8sp7z \
	I1216 02:26:01.639525   10347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 02:26:01.641369   10347 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 02:26:01.641599   10347 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 02:26:01.641628   10347 cni.go:84] Creating CNI manager for ""
	I1216 02:26:01.641637   10347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:26:01.643234   10347 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 02:26:01.644450   10347 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 02:26:01.648499   10347 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 02:26:01.648517   10347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 02:26:01.660768   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 02:26:01.856617   10347 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 02:26:01.856696   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:01.856724   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-568105 minikube.k8s.io/updated_at=2025_12_16T02_26_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=addons-568105 minikube.k8s.io/primary=true
	I1216 02:26:01.936760   10347 ops.go:34] apiserver oom_adj: -16
	I1216 02:26:01.936766   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:02.436902   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:02.936888   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:03.437599   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:03.936944   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:04.437884   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:04.937002   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:05.437616   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:05.936923   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:06.436891   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:06.498764   10347 kubeadm.go:1114] duration metric: took 4.642125373s to wait for elevateKubeSystemPrivileges
	I1216 02:26:06.498799   10347 kubeadm.go:403] duration metric: took 13.976480172s to StartCluster
	I1216 02:26:06.498838   10347 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:06.498979   10347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:26:06.499527   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:06.499734   10347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 02:26:06.499779   10347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:26:06.499841   10347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 02:26:06.499980   10347 addons.go:70] Setting default-storageclass=true in profile "addons-568105"
	I1216 02:26:06.499991   10347 addons.go:70] Setting yakd=true in profile "addons-568105"
	I1216 02:26:06.499999   10347 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:26:06.500012   10347 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-568105"
	I1216 02:26:06.500026   10347 addons.go:70] Setting registry=true in profile "addons-568105"
	I1216 02:26:06.500037   10347 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-568105"
	I1216 02:26:06.500006   10347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-568105"
	I1216 02:26:06.500041   10347 addons.go:70] Setting metrics-server=true in profile "addons-568105"
	I1216 02:26:06.500049   10347 addons.go:239] Setting addon registry=true in "addons-568105"
	I1216 02:26:06.500057   10347 addons.go:239] Setting addon metrics-server=true in "addons-568105"
	I1216 02:26:06.500041   10347 addons.go:70] Setting ingress-dns=true in profile "addons-568105"
	I1216 02:26:06.500077   10347 addons.go:70] Setting cloud-spanner=true in profile "addons-568105"
	I1216 02:26:06.500080   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500090   10347 addons.go:239] Setting addon ingress-dns=true in "addons-568105"
	I1216 02:26:06.500098   10347 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-568105"
	I1216 02:26:06.500082   10347 addons.go:70] Setting inspektor-gadget=true in profile "addons-568105"
	I1216 02:26:06.500129   10347 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-568105"
	I1216 02:26:06.500143   10347 addons.go:239] Setting addon inspektor-gadget=true in "addons-568105"
	I1216 02:26:06.500152   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500162   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500189   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500200   10347 addons.go:70] Setting ingress=true in profile "addons-568105"
	I1216 02:26:06.500213   10347 addons.go:239] Setting addon ingress=true in "addons-568105"
	I1216 02:26:06.500237   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500406   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500568   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500639   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500658   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500675   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500680   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500858   10347 addons.go:70] Setting gcp-auth=true in profile "addons-568105"
	I1216 02:26:06.500901   10347 mustload.go:66] Loading cluster: addons-568105
	I1216 02:26:06.501081   10347 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:26:06.501393   10347 addons.go:70] Setting registry-creds=true in profile "addons-568105"
	I1216 02:26:06.501411   10347 addons.go:239] Setting addon registry-creds=true in "addons-568105"
	I1216 02:26:06.501435   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.501905   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500017   10347 addons.go:239] Setting addon yakd=true in "addons-568105"
	I1216 02:26:06.502398   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.502525   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500071   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.503969   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.505147   10347 addons.go:70] Setting volumesnapshots=true in profile "addons-568105"
	I1216 02:26:06.505181   10347 addons.go:239] Setting addon volumesnapshots=true in "addons-568105"
	I1216 02:26:06.505227   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500021   10347 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-568105"
	I1216 02:26:06.505479   10347 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-568105"
	I1216 02:26:06.505510   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.505755   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.506009   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.506246   10347 out.go:179] * Verifying Kubernetes components...
	I1216 02:26:06.506278   10347 addons.go:70] Setting volcano=true in profile "addons-568105"
	I1216 02:26:06.506281   10347 addons.go:70] Setting storage-provisioner=true in profile "addons-568105"
	I1216 02:26:06.506295   10347 addons.go:239] Setting addon volcano=true in "addons-568105"
	I1216 02:26:06.506300   10347 addons.go:239] Setting addon storage-provisioner=true in "addons-568105"
	I1216 02:26:06.500092   10347 addons.go:239] Setting addon cloud-spanner=true in "addons-568105"
	I1216 02:26:06.506321   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.506325   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.506331   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.506268   10347 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-568105"
	I1216 02:26:06.506503   10347 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-568105"
	I1216 02:26:06.500092   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.509938   10347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:26:06.513446   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.513868   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.514463   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.515375   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.516349   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.516655   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.558966   10347 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 02:26:06.560680   10347 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 02:26:06.560705   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 02:26:06.560768   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.569861   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:06.570268   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 02:26:06.573272   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 02:26:06.574622   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 02:26:06.574809   10347 addons.go:239] Setting addon default-storageclass=true in "addons-568105"
	I1216 02:26:06.574897   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.575478   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.576138   10347 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 02:26:06.576159   10347 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 02:26:06.576170   10347 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 02:26:06.577616   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 02:26:06.577654   10347 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 02:26:06.577664   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 02:26:06.577715   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.578495   10347 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 02:26:06.578519   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 02:26:06.578588   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.579480   10347 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 02:26:06.580352   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 02:26:06.580475   10347 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 02:26:06.580486   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 02:26:06.580541   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.582040   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 02:26:06.582841   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 02:26:06.585983   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:06.588000   10347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:26:06.588220   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 02:26:06.589254   10347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:26:06.589275   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:26:06.589335   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.589764   10347 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 02:26:06.589783   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 02:26:06.589845   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.591332   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 02:26:06.591975   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.592360   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 02:26:06.592380   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 02:26:06.592426   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.592591   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 02:26:06.593710   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 02:26:06.593728   10347 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 02:26:06.593785   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.609678   10347 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 02:26:06.611870   10347 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 02:26:06.612552   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 02:26:06.612569   10347 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 02:26:06.612628   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.613304   10347 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 02:26:06.613319   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 02:26:06.613379   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.615302   10347 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 02:26:06.616633   10347 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 02:26:06.616934   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 02:26:06.617140   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	W1216 02:26:06.616707   10347 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 02:26:06.617655   10347 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-568105"
	I1216 02:26:06.617710   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.618268   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.625282   10347 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 02:26:06.626480   10347 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 02:26:06.626500   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 02:26:06.626557   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.633238   10347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:26:06.633262   10347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:26:06.633331   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.645207   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.646130   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.648253   10347 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 02:26:06.649352   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 02:26:06.649384   10347 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 02:26:06.649448   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.654890   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.661327   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.669848   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.679770   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.687058   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.688047   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.688373   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.688454   10347 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 02:26:06.690570   10347 out.go:179]   - Using image docker.io/busybox:stable
	I1216 02:26:06.690663   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.692106   10347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 02:26:06.692125   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 02:26:06.692182   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.698266   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.704932   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.706432   10347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 02:26:06.724104   10347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:26:06.724481   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.730995   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.739156   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.855807   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 02:26:06.860299   10347 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 02:26:06.860320   10347 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 02:26:06.865082   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 02:26:06.866396   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 02:26:06.876716   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 02:26:06.878046   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 02:26:06.878071   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 02:26:06.879156   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:26:06.881285   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:26:06.884349   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 02:26:06.917295   10347 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 02:26:06.917327   10347 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 02:26:06.931646   10347 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 02:26:06.931685   10347 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 02:26:06.936461   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 02:26:06.938418   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 02:26:06.938463   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 02:26:06.945899   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 02:26:06.945930   10347 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 02:26:06.947286   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 02:26:06.961505   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 02:26:06.961536   10347 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 02:26:06.975027   10347 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 02:26:06.975058   10347 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 02:26:06.980704   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 02:26:06.990433   10347 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 02:26:06.990462   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 02:26:06.999623   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 02:26:06.999657   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 02:26:07.016992   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 02:26:07.017025   10347 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 02:26:07.039533   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 02:26:07.039562   10347 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 02:26:07.052285   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 02:26:07.052333   10347 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 02:26:07.057333   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 02:26:07.065343   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 02:26:07.065395   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 02:26:07.071750   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 02:26:07.089903   10347 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:07.089932   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 02:26:07.123965   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 02:26:07.123992   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 02:26:07.127914   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 02:26:07.127938   10347 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 02:26:07.148202   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:07.194760   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 02:26:07.194792   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 02:26:07.210000   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 02:26:07.210026   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 02:26:07.274043   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 02:26:07.283256   10347 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1216 02:26:07.287020   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 02:26:07.287053   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 02:26:07.287974   10347 node_ready.go:35] waiting up to 6m0s for node "addons-568105" to be "Ready" ...
	I1216 02:26:07.336406   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 02:26:07.336440   10347 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 02:26:07.388258   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 02:26:07.388287   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 02:26:07.431679   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 02:26:07.431706   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 02:26:07.458923   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 02:26:07.458953   10347 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 02:26:07.491411   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 02:26:07.799148   10347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-568105" context rescaled to 1 replicas
	I1216 02:26:08.048936   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.193074656s)
	I1216 02:26:08.048975   10347 addons.go:495] Verifying addon ingress=true in "addons-568105"
	I1216 02:26:08.049002   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.183888431s)
	I1216 02:26:08.049090   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.182673618s)
	I1216 02:26:08.049144   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.172399638s)
	I1216 02:26:08.049218   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.170028525s)
	I1216 02:26:08.049270   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167959139s)
	I1216 02:26:08.049322   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.164948993s)
	I1216 02:26:08.049383   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.112894818s)
	I1216 02:26:08.049437   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.102111409s)
	I1216 02:26:08.049517   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.068780119s)
	I1216 02:26:08.049582   10347 addons.go:495] Verifying addon registry=true in "addons-568105"
	I1216 02:26:08.049650   10347 addons.go:495] Verifying addon metrics-server=true in "addons-568105"
	I1216 02:26:08.050436   10347 out.go:179] * Verifying ingress addon...
	I1216 02:26:08.051282   10347 out.go:179] * Verifying registry addon...
	I1216 02:26:08.052808   10347 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 02:26:08.057317   10347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 02:26:08.058156   10347 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1216 02:26:08.058836   10347 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1216 02:26:08.062272   10347 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 02:26:08.062294   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
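The default-storageclass warning a few lines above is an optimistic-concurrency conflict: the StorageClass was changed between minikube's read and its update, so the apiserver rejects the stale object ("the object has been modified"). The standard client-go remedy is to re-read and retry the mutation under retry.RetryOnConflict. A hedged sketch follows; the kubeconfig path, the "local-path" class, and the default-class annotation come from the log, the surrounding wiring is illustrative and not minikube's actual code.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Mark the "local-path" class non-default, re-reading the object on every
		// attempt so a concurrent writer cannot leave us with a stale resourceVersion.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			fmt.Println("could not update storage class:", err)
		}
	}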
	I1216 02:26:08.488200   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.339942347s)
	W1216 02:26:08.488262   10347 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 02:26:08.488296   10347 retry.go:31] will retry after 345.601988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 02:26:08.488375   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.21430215s)
	I1216 02:26:08.488863   10347 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-568105"
	I1216 02:26:08.489772   10347 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 02:26:08.489777   10347 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-568105 service yakd-dashboard -n yakd-dashboard
	
	I1216 02:26:08.492365   10347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 02:26:08.495888   10347 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 02:26:08.495918   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:08.596419   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:08.596515   10347 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 02:26:08.596536   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:08.834760   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:08.996003   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:09.097348   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:09.097399   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1216 02:26:09.291178   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:09.496061   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:09.596512   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:09.596732   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:09.995475   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:10.096443   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:10.096622   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:10.495460   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:10.555989   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:10.559528   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:10.996208   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:11.096506   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:11.096680   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:11.260052   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.425243565s)
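The failed apply above is the usual CRD-establishment race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same batch as the CRDs that define it, so the first apply fails with "no matches for kind ... ensure CRDs are installed first", and the log shows a retry after ~345ms that re-applies with --force and completes in about 2.4s. A minimal Go sketch of that retry-on-race pattern follows; the helper names, attempt count, and backoff values are illustrative, not minikube's actual retry.go.

	package main

	import (
		"errors"
		"fmt"
		"strings"
		"time"
	)

	// retryableCRDRace reports whether an apply error looks like the race above:
	// a custom resource submitted before its CRD was established.
	func retryableCRDRace(err error) bool {
		return err != nil && strings.Contains(err.Error(), "no matches for kind")
	}

	// applyWithRetry re-runs apply() with a growing backoff, as the log does
	// ("will retry after 345.601988ms"). Names and parameters are hypothetical.
	func applyWithRetry(apply func() error, attempts int, backoff time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil || !retryableCRDRace(err) {
				return err
			}
			fmt.Printf("apply failed, will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := applyWithRetry(func() error {
			calls++
			if calls == 1 { // first attempt hits the race, second succeeds
				return errors.New(`no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"`)
			}
			return nil
		}, 3, 300*time.Millisecond)
		fmt.Println("final:", err)
	}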
	I1216 02:26:11.495752   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:11.595882   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:11.596075   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1216 02:26:11.790538   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:11.995677   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:12.096420   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:12.096629   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:12.495768   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:12.556227   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:12.559994   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:12.995484   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:13.096354   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:13.096410   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:13.495849   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:13.556410   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:13.560261   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:13.791387   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:13.996228   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:14.096511   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:14.096568   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:14.199797   10347 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 02:26:14.199880   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:14.217865   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:14.319486   10347 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 02:26:14.331369   10347 addons.go:239] Setting addon gcp-auth=true in "addons-568105"
	I1216 02:26:14.331429   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:14.331767   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:14.349501   10347 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 02:26:14.349552   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:14.367213   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:14.462505   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:14.463876   10347 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 02:26:14.464879   10347 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 02:26:14.464893   10347 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 02:26:14.476946   10347 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 02:26:14.476970   10347 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 02:26:14.488935   10347 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 02:26:14.488954   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 02:26:14.495702   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:14.501493   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 02:26:14.556095   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:14.560434   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:14.790177   10347 addons.go:495] Verifying addon gcp-auth=true in "addons-568105"
	I1216 02:26:14.791491   10347 out.go:179] * Verifying gcp-auth addon...
	I1216 02:26:14.793468   10347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 02:26:14.795159   10347 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 02:26:14.795172   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:14.996184   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:15.055712   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:15.059413   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:15.296335   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:15.494650   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:15.556081   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:15.559707   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:15.796489   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:15.995290   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:16.055873   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:16.059897   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:16.291320   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:16.296245   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:16.495691   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:16.556172   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:16.560028   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:16.796071   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:16.995547   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:17.056027   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:17.059670   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:17.296074   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:17.495628   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:17.556222   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:17.560106   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:17.796559   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:17.995411   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:18.056032   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:18.059618   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:18.296087   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:18.495627   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:18.556033   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:18.559768   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:18.791086   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:18.795979   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:18.995383   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:19.055933   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:19.059853   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:19.296266   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:19.496031   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:19.555498   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:19.559060   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:19.796557   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:19.995155   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:20.055728   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:20.059416   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:20.296420   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:20.494894   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:20.556329   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:20.560163   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:20.796057   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:20.995047   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:21.055294   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:21.060197   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:21.290686   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:21.295656   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:21.494977   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:21.555554   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:21.559489   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:21.796634   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:21.995570   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:22.055961   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:22.059717   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:22.296215   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:22.495868   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:22.556244   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:22.559974   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:22.795522   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:22.995029   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:23.056435   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:23.060360   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:23.290994   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:23.295802   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:23.495287   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:23.555676   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:23.559546   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:23.796162   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:23.995986   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:24.055362   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:24.060140   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:24.296332   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:24.495668   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:24.555953   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:24.559710   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:24.796134   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:24.995629   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:25.055994   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:25.059673   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:25.291218   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:25.296260   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:25.495523   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:25.555611   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:25.559473   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:25.795687   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:25.995479   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:26.055921   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:26.059669   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:26.296321   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:26.495918   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:26.556402   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:26.560207   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:26.795670   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:26.995360   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:27.055938   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:27.059671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:27.291429   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:27.296441   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:27.495957   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:27.556235   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:27.560039   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:27.796374   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:27.995180   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:28.055788   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:28.059432   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:28.295887   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:28.495486   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:28.555927   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:28.559570   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:28.796178   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:28.995706   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:29.056242   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:29.060091   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:29.295645   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:29.494961   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:29.556510   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:29.559134   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:29.790661   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:29.796190   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:29.996222   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:30.055475   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:30.059204   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:30.296270   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:30.495606   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:30.555998   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:30.559707   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:30.796331   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:30.994777   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:31.056162   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:31.060098   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:31.296384   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:31.494704   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:31.556504   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:31.559292   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:31.796008   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:31.995955   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:32.056398   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:32.060280   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:32.290870   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:32.295808   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:32.495385   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:32.555831   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:32.559669   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:32.796089   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:32.995676   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:33.055991   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:33.059845   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:33.296165   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:33.495526   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:33.555925   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:33.559893   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:33.796403   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:33.995042   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:34.055437   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:34.060207   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:34.295804   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:34.495202   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:34.555847   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:34.559507   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:34.791024   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:34.795758   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:34.995141   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:35.055705   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:35.059465   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:35.295856   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:35.495590   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:35.556344   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:35.560143   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:35.796268   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:35.995762   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:36.056118   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:36.059861   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:36.296093   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:36.495562   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:36.556058   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:36.560051   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:36.796536   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:36.995212   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:37.055784   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:37.059762   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:37.291219   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:37.296279   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:37.495945   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:37.556357   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:37.560219   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:37.795762   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:37.995304   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:38.055651   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:38.059500   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:38.295765   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:38.495142   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:38.555516   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:38.559269   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:38.795669   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:38.995373   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:39.055977   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:39.059744   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:39.291359   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:39.296583   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:39.494902   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:39.555166   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:39.559886   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:39.796699   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:39.995443   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:40.055750   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:40.059568   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:40.296300   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:40.494863   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:40.556718   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:40.559352   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:40.795692   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:40.995271   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:41.055664   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:41.059403   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:41.295847   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:41.495086   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:41.555644   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:41.559448   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:41.791085   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:41.796307   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:41.995177   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:42.055738   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:42.059498   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:42.296047   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:42.495514   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:42.556160   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:42.559967   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:42.796640   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:42.995042   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:43.055324   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:43.060065   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:43.296507   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:43.494684   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:43.556089   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:43.559722   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:43.791320   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:43.796349   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:43.995798   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:44.056423   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:44.060330   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:44.295784   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:44.495346   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:44.555682   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:44.559397   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:44.796101   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:44.995309   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:45.055741   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:45.059563   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:45.296317   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:45.494786   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:45.556224   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:45.559968   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:45.796572   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:45.994929   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:46.056321   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:46.060163   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:46.290649   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:46.295569   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:46.495101   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:46.555510   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:46.559254   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:46.796260   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:46.995094   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:47.056025   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:47.059749   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:47.295677   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:47.495267   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:47.555968   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:47.559911   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:47.796252   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:47.996314   10347 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 02:26:47.996338   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:48.058443   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:48.059366   10347 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 02:26:48.059381   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:48.293182   10347 node_ready.go:49] node "addons-568105" is "Ready"
	I1216 02:26:48.293216   10347 node_ready.go:38] duration metric: took 41.00521133s for node "addons-568105" to be "Ready" ...
	I1216 02:26:48.293239   10347 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:26:48.293300   10347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:26:48.297581   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:48.312717   10347 api_server.go:72] duration metric: took 41.812902671s to wait for apiserver process to appear ...
	I1216 02:26:48.312744   10347 api_server.go:88] waiting for apiserver healthz status ...
	I1216 02:26:48.312771   10347 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 02:26:48.318213   10347 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 02:26:48.319544   10347 api_server.go:141] control plane version: v1.34.2
	I1216 02:26:48.319577   10347 api_server.go:131] duration metric: took 6.825655ms to wait for apiserver health ...
	I1216 02:26:48.319588   10347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 02:26:48.401858   10347 system_pods.go:59] 20 kube-system pods found
	I1216 02:26:48.401903   10347 system_pods.go:61] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.401914   10347 system_pods.go:61] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.401940   10347 system_pods.go:61] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.401949   10347 system_pods.go:61] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.401958   10347 system_pods.go:61] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.401965   10347 system_pods.go:61] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.401971   10347 system_pods.go:61] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.401976   10347 system_pods.go:61] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.401981   10347 system_pods.go:61] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.401990   10347 system_pods.go:61] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.401996   10347 system_pods.go:61] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.402002   10347 system_pods.go:61] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.402010   10347 system_pods.go:61] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.402019   10347 system_pods.go:61] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.402027   10347 system_pods.go:61] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.402035   10347 system_pods.go:61] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.402042   10347 system_pods.go:61] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.402053   10347 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.402061   10347 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.402069   10347 system_pods.go:61] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.402078   10347 system_pods.go:74] duration metric: took 82.482524ms to wait for pod list to return data ...
	I1216 02:26:48.402090   10347 default_sa.go:34] waiting for default service account to be created ...
	I1216 02:26:48.404596   10347 default_sa.go:45] found service account: "default"
	I1216 02:26:48.404620   10347 default_sa.go:55] duration metric: took 2.5239ms for default service account to be created ...
	I1216 02:26:48.404631   10347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 02:26:48.407720   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:48.407747   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.407754   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.407763   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.407771   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.407780   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.407785   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.407792   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.407801   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.407807   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.407827   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.407833   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.407840   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.407853   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.407861   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.407870   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.407883   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.407891   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.407901   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.407908   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.407915   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.407931   10347 retry.go:31] will retry after 209.008811ms: missing components: kube-dns
	I1216 02:26:48.498552   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:48.556427   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:48.560069   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:48.622367   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:48.622404   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.622415   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.622427   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.622437   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.622479   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.622486   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.622493   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.622499   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.622569   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.622603   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.622763   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.622771   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.622779   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.622788   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.622839   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.622852   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.622861   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.622870   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.622879   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.622893   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.622911   10347 retry.go:31] will retry after 239.273402ms: missing components: kube-dns
	I1216 02:26:48.797779   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:48.867516   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:48.867552   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.867563   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.867572   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.867588   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.867599   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.867607   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.867613   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.867619   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.867625   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.867634   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.867645   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.867652   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.867664   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.867678   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.867688   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.867699   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.867708   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.867717   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.867729   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.867746   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.867767   10347 retry.go:31] will retry after 364.128275ms: missing components: kube-dns
	I1216 02:26:48.995983   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:49.058327   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:49.060421   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:49.237410   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:49.237441   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:49.237449   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Running
	I1216 02:26:49.237460   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:49.237468   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:49.237479   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:49.237485   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:49.237491   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:49.237496   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:49.237508   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:49.237535   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:49.237545   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:49.237552   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:49.237559   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:49.237567   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:49.237575   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:49.237593   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:49.237603   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:49.237610   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:49.237622   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:49.237627   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Running
	I1216 02:26:49.237638   10347 system_pods.go:126] duration metric: took 832.999889ms to wait for k8s-apps to be running ...
	I1216 02:26:49.237648   10347 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 02:26:49.237695   10347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:26:49.254921   10347 system_svc.go:56] duration metric: took 17.265442ms WaitForService to wait for kubelet
	I1216 02:26:49.254955   10347 kubeadm.go:587] duration metric: took 42.755146154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:26:49.254981   10347 node_conditions.go:102] verifying NodePressure condition ...
	I1216 02:26:49.258113   10347 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 02:26:49.258141   10347 node_conditions.go:123] node cpu capacity is 8
	I1216 02:26:49.258168   10347 node_conditions.go:105] duration metric: took 3.175466ms to run NodePressure ...
	I1216 02:26:49.258188   10347 start.go:242] waiting for startup goroutines ...
	I1216 02:26:49.297253   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:49.496432   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:49.557050   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:49.560734   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:49.797157   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:49.996273   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:50.056143   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:50.060549   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:50.296966   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:50.496855   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:50.556450   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:50.560632   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:50.796627   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:50.995767   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:51.056765   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:51.059777   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:51.296772   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:51.495982   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:51.556958   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:51.559958   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:51.797262   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:51.995598   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:52.056445   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:52.060714   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:52.297219   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:52.496448   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:52.555849   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:52.559876   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:52.831142   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:52.996340   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:53.056180   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:53.060283   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:53.297181   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:53.496323   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:53.596576   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:53.596591   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:53.796012   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:53.996911   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:54.056685   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:54.157443   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:54.297351   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:54.496562   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:54.556313   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:54.560427   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:54.796681   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:54.996360   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:55.056015   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:55.097562   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:55.296617   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:55.496327   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:55.555737   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:55.560460   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:55.796958   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:55.996675   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:56.056537   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:56.059854   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:56.296660   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:56.495883   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:56.556837   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:56.559891   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:56.796653   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:56.995643   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:57.056071   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:57.060172   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:57.297269   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:57.497126   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:57.557240   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:57.560507   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:57.796548   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:57.995434   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:58.056252   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:58.062151   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:58.295686   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:58.495230   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:58.555722   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:58.559531   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:58.796087   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:58.999078   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:59.055697   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:59.061766   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:59.296985   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:59.496056   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:59.556749   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:59.560300   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:59.796811   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:59.995853   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:00.056464   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:00.060680   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:00.297094   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:00.496029   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:00.577006   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:00.627648   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:00.797284   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:00.996046   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:01.056650   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:01.059520   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:01.296294   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:01.496636   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:01.556236   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:01.560262   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:01.797072   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:01.996976   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:02.056768   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:02.060213   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:02.297185   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:02.496913   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:02.556416   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:02.560372   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:02.795998   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:02.995602   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:03.055946   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:03.059878   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:03.297029   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:03.496363   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:03.556181   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:03.560500   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:03.796435   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:03.995601   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:04.056362   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:04.060506   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:04.297543   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:04.495858   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:04.559014   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:04.560022   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:04.796551   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:04.995365   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:05.055759   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:05.059428   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:05.296224   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:05.496072   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:05.556627   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:05.559492   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:05.795728   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:05.996027   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:06.056528   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:06.059671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:06.297221   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:06.495901   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:06.556343   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:06.560068   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:06.797757   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:06.995671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:07.056064   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:07.060445   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:07.296671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:07.495714   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:07.556250   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:07.560792   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:07.796891   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:07.996517   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:08.056236   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:08.060516   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:08.296487   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:08.495704   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:08.555777   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:08.559483   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:08.796835   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:08.995747   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:09.056588   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:09.059516   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:09.296527   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:09.495499   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:09.556073   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:09.560006   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:09.796558   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:09.995467   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:10.056317   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:10.060672   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:10.296914   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:10.496153   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:10.555501   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:10.560291   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:10.797388   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:10.995157   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:11.056245   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:11.060356   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:11.296210   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:11.496278   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:11.596593   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:11.596605   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:11.798624   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:11.995917   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:12.056595   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:12.059792   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:12.297899   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:12.496196   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:12.556209   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:12.560960   10347 kapi.go:107] duration metric: took 1m4.503640416s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 02:27:12.797348   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:12.996312   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:13.056119   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:13.379194   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:13.496321   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:13.555998   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:13.796697   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:13.995950   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:14.056267   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:14.297208   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:14.496366   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:14.556309   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:14.797412   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:14.995628   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:15.096079   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:15.296943   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:15.495663   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:15.556223   10347 kapi.go:107] duration metric: took 1m7.503412117s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 02:27:15.796629   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:15.995763   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:16.296695   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:16.495771   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:16.796516   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:16.996213   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:17.297100   10347 kapi.go:107] duration metric: took 1m2.50362929s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 02:27:17.298145   10347 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-568105 cluster.
	I1216 02:27:17.299254   10347 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 02:27:17.300620   10347 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 02:27:17.497087   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:17.995604   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:18.496074   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:18.997036   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:19.496345   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:19.996054   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:20.496652   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:20.996065   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:21.496991   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:21.995906   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:22.496148   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:22.996066   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:23.495637   10347 kapi.go:107] duration metric: took 1m15.003275635s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 02:27:23.497271   10347 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner, registry-creds, inspektor-gadget, nvidia-device-plugin, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1216 02:27:23.498377   10347 addons.go:530] duration metric: took 1m16.998540371s for enable addons: enabled=[cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner registry-creds inspektor-gadget nvidia-device-plugin metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1216 02:27:23.498411   10347 start.go:247] waiting for cluster config update ...
	I1216 02:27:23.498427   10347 start.go:256] writing updated cluster config ...
	I1216 02:27:23.498661   10347 ssh_runner.go:195] Run: rm -f paused
	I1216 02:27:23.502564   10347 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:27:23.505236   10347 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cjv67" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.508999   10347 pod_ready.go:94] pod "coredns-66bc5c9577-cjv67" is "Ready"
	I1216 02:27:23.509021   10347 pod_ready.go:86] duration metric: took 3.765345ms for pod "coredns-66bc5c9577-cjv67" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.510593   10347 pod_ready.go:83] waiting for pod "etcd-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.513667   10347 pod_ready.go:94] pod "etcd-addons-568105" is "Ready"
	I1216 02:27:23.513683   10347 pod_ready.go:86] duration metric: took 3.074152ms for pod "etcd-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.515263   10347 pod_ready.go:83] waiting for pod "kube-apiserver-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.518326   10347 pod_ready.go:94] pod "kube-apiserver-addons-568105" is "Ready"
	I1216 02:27:23.518344   10347 pod_ready.go:86] duration metric: took 3.062383ms for pod "kube-apiserver-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.519841   10347 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.905735   10347 pod_ready.go:94] pod "kube-controller-manager-addons-568105" is "Ready"
	I1216 02:27:23.905759   10347 pod_ready.go:86] duration metric: took 385.903954ms for pod "kube-controller-manager-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:24.106789   10347 pod_ready.go:83] waiting for pod "kube-proxy-plzgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:24.506138   10347 pod_ready.go:94] pod "kube-proxy-plzgj" is "Ready"
	I1216 02:27:24.506163   10347 pod_ready.go:86] duration metric: took 399.349752ms for pod "kube-proxy-plzgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:24.707330   10347 pod_ready.go:83] waiting for pod "kube-scheduler-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:25.106642   10347 pod_ready.go:94] pod "kube-scheduler-addons-568105" is "Ready"
	I1216 02:27:25.106669   10347 pod_ready.go:86] duration metric: took 399.31319ms for pod "kube-scheduler-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:25.106680   10347 pod_ready.go:40] duration metric: took 1.604089765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:27:25.150433   10347 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 02:27:25.152995   10347 out.go:179] * Done! kubectl is now configured to use "addons-568105" cluster and "default" namespace by default
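
The kapi.go lines above poll each addon's pods by label selector until they report Ready, then record the total wait as a duration metric. A minimal client-go sketch of the same kind of readiness wait is below; it is illustrative only, not the kapi.go implementation, and the kubeconfig path, the 500ms poll interval, and the choice of one selector from the log are assumptions.

	// waitready.go - hedged sketch of a label-selector readiness wait.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		// Assumed: default kubeconfig written for the active profile (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// One of the selectors the test waits on; the others follow the same pattern.
		selector := "kubernetes.io/minikube-addons=registry"
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !podReady(p) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("pods matching", selector, "are Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}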
	
	
	==> CRI-O <==
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.92694305Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-r7sls/POD" id=3323f758-0b0b-4625-9192-7af3caa798c8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.927012003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.932869672Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-r7sls Namespace:default ID:ebc49655b4a3b0fc4fe7a60782764f7fe3ee3ad2ca81e7d31496d90b7f9a8374 UID:4d71c0b3-0290-4ca5-8e11-555d270a8d6f NetNS:/var/run/netns/48256c64-8e15-430b-bff4-8a70ce522f92 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00068ea48}] Aliases:map[]}"
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.932900709Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-r7sls to CNI network \"kindnet\" (type=ptp)"
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.943283507Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-r7sls Namespace:default ID:ebc49655b4a3b0fc4fe7a60782764f7fe3ee3ad2ca81e7d31496d90b7f9a8374 UID:4d71c0b3-0290-4ca5-8e11-555d270a8d6f NetNS:/var/run/netns/48256c64-8e15-430b-bff4-8a70ce522f92 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00068ea48}] Aliases:map[]}"
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.943431804Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-r7sls for CNI network kindnet (type=ptp)"
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.944289156Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.94508707Z" level=info msg="Ran pod sandbox ebc49655b4a3b0fc4fe7a60782764f7fe3ee3ad2ca81e7d31496d90b7f9a8374 with infra container: default/hello-world-app-5d498dc89-r7sls/POD" id=3323f758-0b0b-4625-9192-7af3caa798c8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.946450681Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=77f4788b-f821-4db5-a6b0-5cb81f743351 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.946587312Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=77f4788b-f821-4db5-a6b0-5cb81f743351 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.946625775Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=77f4788b-f821-4db5-a6b0-5cb81f743351 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.947266284Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c2e05920-2d4a-446e-b4a8-a08a8f99989e name=/runtime.v1.ImageService/PullImage
	Dec 16 02:30:03 addons-568105 crio[772]: time="2025-12-16T02:30:03.953418045Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.323582328Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=c2e05920-2d4a-446e-b4a8-a08a8f99989e name=/runtime.v1.ImageService/PullImage
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.324207005Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f388ceed-d8dc-41db-82fe-b316c29ec8b4 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.325714895Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cf7af6c7-8661-462f-bb96-7a65a86ca719 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.329337732Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-r7sls/hello-world-app" id=6f386974-e2fe-41b1-8260-c1b5dcbc8138 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.32944048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.33466205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.334870681Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/19ea1a638e84415b9357b758f63313d62cd5597b39252ebdd87352a7e344515b/merged/etc/passwd: no such file or directory"
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.334901928Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/19ea1a638e84415b9357b758f63313d62cd5597b39252ebdd87352a7e344515b/merged/etc/group: no such file or directory"
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.335162575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.377501108Z" level=info msg="Created container 67b4aae162be664748bf158f280c18a33ec2511c9c8eba12b8a9301950d5508c: default/hello-world-app-5d498dc89-r7sls/hello-world-app" id=6f386974-e2fe-41b1-8260-c1b5dcbc8138 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.378262666Z" level=info msg="Starting container: 67b4aae162be664748bf158f280c18a33ec2511c9c8eba12b8a9301950d5508c" id=0665b5d0-4f2b-4628-91c9-7e94dca5bb1e name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 02:30:04 addons-568105 crio[772]: time="2025-12-16T02:30:04.380743438Z" level=info msg="Started container" PID=9432 containerID=67b4aae162be664748bf158f280c18a33ec2511c9c8eba12b8a9301950d5508c description=default/hello-world-app-5d498dc89-r7sls/hello-world-app id=0665b5d0-4f2b-4628-91c9-7e94dca5bb1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebc49655b4a3b0fc4fe7a60782764f7fe3ee3ad2ca81e7d31496d90b7f9a8374
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	67b4aae162be6       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   ebc49655b4a3b       hello-world-app-5d498dc89-r7sls             default
	4efff691fcef8       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   8f6190707ee8e       registry-creds-764b6fb674-d6sz6             kube-system
	533f99de65d99       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   8cef5d8cfd865       nginx                                       default
	44e24f2e49438       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   8afd7b5b116ed       busybox                                     default
	5a9662216a426       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	7d237fed170b0       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	ae9b9276f546b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	7e47591ff7931       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	8136945c1766e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   8c837d91c6fd1       gadget-qf8c2                                gadget
	e1658f146a4d2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	7f1d5f2b42a6d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   642a2eb535cd1       gcp-auth-78565c9fb4-8dg8c                   gcp-auth
	c6973891ec7ff       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   48ad19d4a1f48       ingress-nginx-controller-85d4c799dd-dwmcj   ingress-nginx
	978c45196be43       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   5d4f062dcb613       registry-proxy-gx76q                        kube-system
	dbded21ce9b6c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	5258c264d4ef1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   1dcf88e800b08       amd-gpu-device-plugin-zpwqw                 kube-system
	f2b1c7c11696c       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   c2f1828389bd2       nvidia-device-plugin-daemonset-kzstn        kube-system
	1034828f8f006       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   5833967738d50       snapshot-controller-7d9fbc56b8-zzrsb        kube-system
	51cd2f7227a66       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   e01bac62c5e52       snapshot-controller-7d9fbc56b8-cl5vk        kube-system
	f07eb262fc567       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   d7bf72bb58d79       csi-hostpath-resizer-0                      kube-system
	7f18a86c1651e       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             3 minutes ago            Exited              patch                                    1                   36ae70cc61847       ingress-nginx-admission-patch-btk4c         ingress-nginx
	218d4e28f821c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   6bd1a8fedda8d       ingress-nginx-admission-create-b9ppx        ingress-nginx
	c790a5dda1f08       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   72add02616452       csi-hostpath-attacher-0                     kube-system
	777da03eb5c22       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   9e3700b012857       cloud-spanner-emulator-5bdddb765-r5xh9      default
	c3d2e4a1a0c55       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   b83c3c38ebae0       registry-6b586f9694-b7vlw                   kube-system
	b41f823822869       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   ec5fa893ac51b       yakd-dashboard-5ff678cb9-hsz94              yakd-dashboard
	aacc04b82103a       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   d5840fd038606       metrics-server-85b7d694d7-v6wb9             kube-system
	2c63fbd589fcf       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   ade576fb9e8e1       local-path-provisioner-648f6765c9-72tvv     local-path-storage
	4e4882ff4f3f0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   99985094a15f4       kube-ingress-dns-minikube                   kube-system
	df8bdac96f7e8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   92dea7f717284       coredns-66bc5c9577-cjv67                    kube-system
	ae4534dbc38ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   03f5001ddf260       storage-provisioner                         kube-system
	4472bad932d44       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago            Running             kube-proxy                               0                   04303bfba66e3       kube-proxy-plzgj                            kube-system
	42bdabbf350a0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   c07898e0059b1       kindnet-7cvb5                               kube-system
	168b7336b0d71       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   7539327b49a77       kube-scheduler-addons-568105                kube-system
	5fc64e9c331d1       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   917c69ef5faee       kube-controller-manager-addons-568105       kube-system
	f3d9e1dc84639       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   9c54b75ce508d       etcd-addons-568105                          kube-system
	c1f7c97ecb411       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   5d03c9ae2d4d0       kube-apiserver-addons-568105                kube-system
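
The container status table above is essentially the runtime's answer to a CRI ListContainers call. A minimal sketch of issuing that call against CRI-O over gRPC follows; the socket path /var/run/crio/crio.sock is the usual CRI-O default and is assumed here, and the output format only loosely mirrors the table columns.

	// crilist.go - hedged sketch: list containers over the CRI gRPC API.
	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncated container ID, name, and state, similar to the table above.
			fmt.Printf("%.13s  %-40s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}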
	
	
	==> coredns [df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae] <==
	[INFO] 10.244.0.21:47224 - 44166 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156328s
	[INFO] 10.244.0.21:45372 - 1821 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006238635s
	[INFO] 10.244.0.21:54502 - 54653 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007296124s
	[INFO] 10.244.0.21:36206 - 36971 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004533083s
	[INFO] 10.244.0.21:48777 - 9670 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004880422s
	[INFO] 10.244.0.21:44170 - 62549 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005346265s
	[INFO] 10.244.0.21:47757 - 8392 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00611276s
	[INFO] 10.244.0.21:47135 - 12200 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000745257s
	[INFO] 10.244.0.21:52096 - 18320 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001018088s
	[INFO] 10.244.0.27:42618 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000181852s
	[INFO] 10.244.0.27:46602 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128477s
	[INFO] 10.244.0.29:59058 - 39203 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000205069s
	[INFO] 10.244.0.29:40886 - 14670 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000263663s
	[INFO] 10.244.0.29:48896 - 53394 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000112384s
	[INFO] 10.244.0.29:50839 - 5292 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000159775s
	[INFO] 10.244.0.29:36463 - 52086 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000101616s
	[INFO] 10.244.0.29:40304 - 57852 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000121075s
	[INFO] 10.244.0.29:50007 - 41139 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006925576s
	[INFO] 10.244.0.29:44591 - 42390 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.007351921s
	[INFO] 10.244.0.29:54528 - 51723 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00588937s
	[INFO] 10.244.0.29:40348 - 34787 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006655144s
	[INFO] 10.244.0.29:59873 - 26694 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003704879s
	[INFO] 10.244.0.29:35416 - 56408 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006132198s
	[INFO] 10.244.0.29:58077 - 13531 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001610413s
	[INFO] 10.244.0.29:58894 - 17347 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002294936s
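
The alternating NXDOMAIN/NOERROR answers above are a pod resolver walking its resolv.conf search path (the cluster domains, then the GCE host's internal domains) before the bare name finally resolves. A minimal lookup sketch is below; it assumes it runs inside a pod that uses the cluster DNS, and the two names are taken from the log above.

	// dnscheck.go - hedged sketch of the lookups seen in the coredns log above.
	package main

	import (
		"context"
		"fmt"
		"net"
	)

	func main() {
		for _, name := range []string{
			"registry.kube-system.svc.cluster.local", // resolves inside the cluster
			"storage.googleapis.com",                 // walks the search path before the bare query
		} {
			addrs, err := net.DefaultResolver.LookupHost(context.TODO(), name)
			fmt.Println(name, addrs, err)
		}
	}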
	
	
	==> describe nodes <==
	Name:               addons-568105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-568105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=addons-568105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T02_26_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-568105
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-568105"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 02:25:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-568105
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 02:29:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 02:29:45 +0000   Tue, 16 Dec 2025 02:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 02:29:45 +0000   Tue, 16 Dec 2025 02:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 02:29:45 +0000   Tue, 16 Dec 2025 02:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 02:29:45 +0000   Tue, 16 Dec 2025 02:26:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-568105
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                23aff3bb-760a-437e-a58a-31de8eddbaa4
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  default                     cloud-spanner-emulator-5bdddb765-r5xh9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  default                     hello-world-app-5d498dc89-r7sls              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-qf8c2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  gcp-auth                    gcp-auth-78565c9fb4-8dg8c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-dwmcj    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m57s
	  kube-system                 amd-gpu-device-plugin-zpwqw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 coredns-66bc5c9577-cjv67                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m59s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 csi-hostpathplugin-hd2bb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 etcd-addons-568105                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m5s
	  kube-system                 kindnet-7cvb5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m59s
	  kube-system                 kube-apiserver-addons-568105                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-addons-568105        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-plzgj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-scheduler-addons-568105                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 metrics-server-85b7d694d7-v6wb9              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m58s
	  kube-system                 nvidia-device-plugin-daemonset-kzstn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 registry-6b586f9694-b7vlw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 registry-creds-764b6fb674-d6sz6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 registry-proxy-gx76q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-cl5vk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 snapshot-controller-7d9fbc56b8-zzrsb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  local-path-storage          local-path-provisioner-648f6765c9-72tvv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-hsz94               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node addons-568105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node addons-568105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x8 over 4m9s)  kubelet          Node addons-568105 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s                 kubelet          Node addons-568105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s                 kubelet          Node addons-568105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s                 kubelet          Node addons-568105 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m                   node-controller  Node addons-568105 event: Registered Node addons-568105 in Controller
	  Normal  NodeReady                3m18s                kubelet          Node addons-568105 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079] <==
	{"level":"warn","ts":"2025-12-16T02:25:58.166359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.179929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.187163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.193535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.199880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.206295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.213562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.221275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.227267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.233394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.240228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.247144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.252994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.268018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.274691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.281297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.323584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:09.016304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:09.025085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.711913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.718493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.732162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.738319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:27:00.796273Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.707416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:27:00.796444Z","caller":"traceutil/trace.go:172","msg":"trace[1190644849] range","detail":"{range_begin:/registry/controllers; range_end:; response_count:0; response_revision:1031; }","duration":"127.88781ms","start":"2025-12-16T02:27:00.668541Z","end":"2025-12-16T02:27:00.796429Z","steps":["trace[1190644849] 'range keys from in-memory index tree'  (duration: 127.644072ms)"],"step_count":1}
	
	
	==> gcp-auth [7f1d5f2b42a6dd6c9827db63b9a36c72d5e9c37d17e0ee2e7879b7e1463a6149] <==
	2025/12/16 02:27:16 GCP Auth Webhook started!
	2025/12/16 02:27:25 Ready to marshal response ...
	2025/12/16 02:27:25 Ready to write response ...
	2025/12/16 02:27:25 Ready to marshal response ...
	2025/12/16 02:27:25 Ready to write response ...
	2025/12/16 02:27:25 Ready to marshal response ...
	2025/12/16 02:27:25 Ready to write response ...
	2025/12/16 02:27:37 Ready to marshal response ...
	2025/12/16 02:27:37 Ready to write response ...
	2025/12/16 02:27:37 Ready to marshal response ...
	2025/12/16 02:27:37 Ready to write response ...
	2025/12/16 02:27:40 Ready to marshal response ...
	2025/12/16 02:27:40 Ready to write response ...
	2025/12/16 02:27:45 Ready to marshal response ...
	2025/12/16 02:27:45 Ready to write response ...
	2025/12/16 02:27:45 Ready to marshal response ...
	2025/12/16 02:27:45 Ready to write response ...
	2025/12/16 02:27:59 Ready to marshal response ...
	2025/12/16 02:27:59 Ready to write response ...
	2025/12/16 02:28:21 Ready to marshal response ...
	2025/12/16 02:28:21 Ready to write response ...
	2025/12/16 02:30:03 Ready to marshal response ...
	2025/12/16 02:30:03 Ready to write response ...
	
	
	==> kernel <==
	 02:30:05 up 12 min,  0 user,  load average: 0.26, 0.46, 0.23
	Linux addons-568105 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae] <==
	I1216 02:27:57.677392       1 main.go:301] handling current node
	I1216 02:28:07.677922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:28:07.677957       1 main.go:301] handling current node
	I1216 02:28:17.677992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:28:17.678027       1 main.go:301] handling current node
	I1216 02:28:27.677452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:28:27.677491       1 main.go:301] handling current node
	I1216 02:28:37.677348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:28:37.677379       1 main.go:301] handling current node
	I1216 02:28:47.677205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:28:47.677232       1 main.go:301] handling current node
	I1216 02:28:57.683358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:28:57.683389       1 main.go:301] handling current node
	I1216 02:29:07.683380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:29:07.683418       1 main.go:301] handling current node
	I1216 02:29:17.677107       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:29:17.677137       1 main.go:301] handling current node
	I1216 02:29:27.678058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:29:27.678092       1 main.go:301] handling current node
	I1216 02:29:37.677576       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:29:37.677617       1 main.go:301] handling current node
	I1216 02:29:47.677398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:29:47.677467       1 main.go:301] handling current node
	I1216 02:29:57.683472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:29:57.683511       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed] <==
	E1216 02:26:57.067152       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.144.234:443: connect: connection refused" logger="UnhandledError"
	E1216 02:26:57.068899       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.144.234:443: connect: connection refused" logger="UnhandledError"
	W1216 02:26:58.067173       1 handler_proxy.go:99] no RequestInfo found in the context
	W1216 02:26:58.067173       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 02:26:58.067290       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 02:26:58.067305       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1216 02:26:58.067332       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 02:26:58.068320       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 02:26:58.773774       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 02:27:02.080836       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1216 02:27:02.081264       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 02:27:02.081311       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1216 02:27:02.097857       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1216 02:27:34.804627       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59880: use of closed network connection
	E1216 02:27:34.951323       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59898: use of closed network connection
	I1216 02:27:40.767909       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 02:27:40.974643       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.115.237"}
	I1216 02:28:07.058676       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 02:30:03.683986       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.171.92"}
	
	
	==> kube-controller-manager [5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800] <==
	I1216 02:26:05.690020       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 02:26:05.690101       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 02:26:05.690409       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 02:26:05.690495       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 02:26:05.690532       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 02:26:05.690613       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 02:26:05.691430       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 02:26:05.691454       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 02:26:05.691484       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 02:26:05.692783       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 02:26:05.693316       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 02:26:05.698261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:26:05.700995       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 02:26:05.702901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:26:05.706165       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 02:26:05.715782       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1216 02:26:07.706008       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 02:26:35.706701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 02:26:35.706869       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 02:26:35.706941       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 02:26:35.722524       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 02:26:35.726244       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 02:26:35.807635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:26:35.826934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 02:26:50.625224       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72] <==
	I1216 02:26:08.133354       1 server_linux.go:53] "Using iptables proxy"
	I1216 02:26:08.192955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 02:26:08.293723       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 02:26:08.293752       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 02:26:08.293867       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 02:26:08.313447       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 02:26:08.313506       1 server_linux.go:132] "Using iptables Proxier"
	I1216 02:26:08.318991       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 02:26:08.323886       1 server.go:527] "Version info" version="v1.34.2"
	I1216 02:26:08.323991       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 02:26:08.325631       1 config.go:200] "Starting service config controller"
	I1216 02:26:08.325655       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 02:26:08.325735       1 config.go:106] "Starting endpoint slice config controller"
	I1216 02:26:08.325767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 02:26:08.325897       1 config.go:309] "Starting node config controller"
	I1216 02:26:08.325915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 02:26:08.325923       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 02:26:08.326185       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 02:26:08.326202       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 02:26:08.425850       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 02:26:08.425944       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 02:26:08.426325       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b] <==
	E1216 02:25:58.730800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 02:25:58.730910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 02:25:58.731130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 02:25:58.731167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 02:25:58.731547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 02:25:58.731696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 02:25:58.731941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:25:58.731962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:25:58.732073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 02:25:58.732146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:25:58.732202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 02:25:58.732219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 02:25:58.732237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 02:25:58.732290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 02:25:58.732283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 02:25:58.732320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 02:25:58.732340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 02:25:58.732430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 02:25:59.623374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:25:59.665003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:25:59.704996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 02:25:59.715055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 02:25:59.799611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 02:25:59.844753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1216 02:26:02.429071       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 02:28:22 addons-568105 kubelet[1277]: I1216 02:28:22.462254    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.462236039 podStartE2EDuration="1.462236039s" podCreationTimestamp="2025-12-16 02:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:28:22.461931469 +0000 UTC m=+141.668524965" watchObservedRunningTime="2025-12-16 02:28:22.462236039 +0000 UTC m=+141.668829535"
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.198691    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbh4d\" (UniqueName: \"kubernetes.io/projected/ebbc284d-7fa0-4f34-9861-f76ff27c9a6a-kube-api-access-xbh4d\") pod \"ebbc284d-7fa0-4f34-9861-f76ff27c9a6a\" (UID: \"ebbc284d-7fa0-4f34-9861-f76ff27c9a6a\") "
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.198873    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e3b1cdd1-da26-11f0-bfae-f6a1d8b2e930\") pod \"ebbc284d-7fa0-4f34-9861-f76ff27c9a6a\" (UID: \"ebbc284d-7fa0-4f34-9861-f76ff27c9a6a\") "
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.198941    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ebbc284d-7fa0-4f34-9861-f76ff27c9a6a-gcp-creds\") pod \"ebbc284d-7fa0-4f34-9861-f76ff27c9a6a\" (UID: \"ebbc284d-7fa0-4f34-9861-f76ff27c9a6a\") "
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.199075    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc284d-7fa0-4f34-9861-f76ff27c9a6a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ebbc284d-7fa0-4f34-9861-f76ff27c9a6a" (UID: "ebbc284d-7fa0-4f34-9861-f76ff27c9a6a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.201246    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebbc284d-7fa0-4f34-9861-f76ff27c9a6a-kube-api-access-xbh4d" (OuterVolumeSpecName: "kube-api-access-xbh4d") pod "ebbc284d-7fa0-4f34-9861-f76ff27c9a6a" (UID: "ebbc284d-7fa0-4f34-9861-f76ff27c9a6a"). InnerVolumeSpecName "kube-api-access-xbh4d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.202855    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^e3b1cdd1-da26-11f0-bfae-f6a1d8b2e930" (OuterVolumeSpecName: "task-pv-storage") pod "ebbc284d-7fa0-4f34-9861-f76ff27c9a6a" (UID: "ebbc284d-7fa0-4f34-9861-f76ff27c9a6a"). InnerVolumeSpecName "pvc-7d5eb364-797e-4af6-9b07-fcacfeff9c75". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.299698    1277 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ebbc284d-7fa0-4f34-9861-f76ff27c9a6a-gcp-creds\") on node \"addons-568105\" DevicePath \"\""
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.299726    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xbh4d\" (UniqueName: \"kubernetes.io/projected/ebbc284d-7fa0-4f34-9861-f76ff27c9a6a-kube-api-access-xbh4d\") on node \"addons-568105\" DevicePath \"\""
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.299758    1277 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-7d5eb364-797e-4af6-9b07-fcacfeff9c75\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e3b1cdd1-da26-11f0-bfae-f6a1d8b2e930\") on node \"addons-568105\" "
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.303786    1277 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-7d5eb364-797e-4af6-9b07-fcacfeff9c75" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^e3b1cdd1-da26-11f0-bfae-f6a1d8b2e930") on node "addons-568105"
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.400970    1277 reconciler_common.go:299] "Volume detached for volume \"pvc-7d5eb364-797e-4af6-9b07-fcacfeff9c75\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e3b1cdd1-da26-11f0-bfae-f6a1d8b2e930\") on node \"addons-568105\" DevicePath \"\""
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.475805    1277 scope.go:117] "RemoveContainer" containerID="de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049"
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.489361    1277 scope.go:117] "RemoveContainer" containerID="de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049"
	Dec 16 02:28:28 addons-568105 kubelet[1277]: E1216 02:28:28.489794    1277 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049\": container with ID starting with de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049 not found: ID does not exist" containerID="de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049"
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.489960    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049"} err="failed to get container status \"de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049\": rpc error: code = NotFound desc = could not find container \"de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049\": container with ID starting with de84d1f702b7233202ec4a380c7dd7b61ca8c0531f7955d8326540a5f590d049 not found: ID does not exist"
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.883413    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gx76q" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:28:28 addons-568105 kubelet[1277]: I1216 02:28:28.886783    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbc284d-7fa0-4f34-9861-f76ff27c9a6a" path="/var/lib/kubelet/pods/ebbc284d-7fa0-4f34-9861-f76ff27c9a6a/volumes"
	Dec 16 02:28:32 addons-568105 kubelet[1277]: I1216 02:28:32.882639    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zpwqw" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:29:26 addons-568105 kubelet[1277]: I1216 02:29:26.882876    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kzstn" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:29:37 addons-568105 kubelet[1277]: I1216 02:29:37.882930    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gx76q" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:29:59 addons-568105 kubelet[1277]: I1216 02:29:59.883196    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zpwqw" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:30:03 addons-568105 kubelet[1277]: I1216 02:30:03.677767    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frh2z\" (UniqueName: \"kubernetes.io/projected/4d71c0b3-0290-4ca5-8e11-555d270a8d6f-kube-api-access-frh2z\") pod \"hello-world-app-5d498dc89-r7sls\" (UID: \"4d71c0b3-0290-4ca5-8e11-555d270a8d6f\") " pod="default/hello-world-app-5d498dc89-r7sls"
	Dec 16 02:30:03 addons-568105 kubelet[1277]: I1216 02:30:03.677833    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4d71c0b3-0290-4ca5-8e11-555d270a8d6f-gcp-creds\") pod \"hello-world-app-5d498dc89-r7sls\" (UID: \"4d71c0b3-0290-4ca5-8e11-555d270a8d6f\") " pod="default/hello-world-app-5d498dc89-r7sls"
	Dec 16 02:30:04 addons-568105 kubelet[1277]: I1216 02:30:04.843492    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-r7sls" podStartSLOduration=1.465202328 podStartE2EDuration="1.843471714s" podCreationTimestamp="2025-12-16 02:30:03 +0000 UTC" firstStartedPulling="2025-12-16 02:30:03.946922637 +0000 UTC m=+243.153516112" lastFinishedPulling="2025-12-16 02:30:04.325192023 +0000 UTC m=+243.531785498" observedRunningTime="2025-12-16 02:30:04.843367583 +0000 UTC m=+244.049961080" watchObservedRunningTime="2025-12-16 02:30:04.843471714 +0000 UTC m=+244.050065209"
	
	
	==> storage-provisioner [ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b] <==
	W1216 02:29:41.207223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:43.209990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:43.213586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:45.216360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:45.220709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:47.223973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:47.227524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:49.231092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:49.234709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:51.237496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:51.241111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:53.243799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:53.248386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:55.251707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:55.255254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:57.258207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:57.261778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:59.264993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:29:59.268293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:01.270893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:01.274428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:03.277190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:03.280865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:05.284140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:05.287714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-568105 -n addons-568105
helpers_test.go:270: (dbg) Run:  kubectl --context addons-568105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-568105 describe pod ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-568105 describe pod ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c: exit status 1 (58.085134ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b9ppx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-btk4c" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-568105 describe pod ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (241.6398ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:30:06.162240   24568 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:30:06.162543   24568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:06.162554   24568 out.go:374] Setting ErrFile to fd 2...
	I1216 02:30:06.162558   24568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:06.162732   24568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:30:06.163001   24568 mustload.go:66] Loading cluster: addons-568105
	I1216 02:30:06.163311   24568 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:30:06.163330   24568 addons.go:622] checking whether the cluster is paused
	I1216 02:30:06.163408   24568 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:30:06.163419   24568 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:30:06.163755   24568 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:30:06.181520   24568 ssh_runner.go:195] Run: systemctl --version
	I1216 02:30:06.181580   24568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:30:06.198919   24568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:30:06.295659   24568 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:30:06.295725   24568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:30:06.325647   24568 cri.go:89] found id: "4efff691fcef802737c6fd1fa0c742d52a1b12d293a75b61aebf6b333a341078"
	I1216 02:30:06.325675   24568 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:30:06.325682   24568 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:30:06.325689   24568 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:30:06.325694   24568 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:30:06.325702   24568 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:30:06.325708   24568 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:30:06.325714   24568 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:30:06.325720   24568 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:30:06.325742   24568 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:30:06.325753   24568 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:30:06.325761   24568 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:30:06.325764   24568 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:30:06.325770   24568 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:30:06.325773   24568 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:30:06.325778   24568 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:30:06.325784   24568 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:30:06.325789   24568 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:30:06.325791   24568 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:30:06.325795   24568 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:30:06.325801   24568 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:30:06.325811   24568 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:30:06.325832   24568 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:30:06.325840   24568 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:30:06.325855   24568 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:30:06.325864   24568 cri.go:89] found id: ""
	I1216 02:30:06.325922   24568 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:30:06.339607   24568 out.go:203] 
	W1216 02:30:06.340862   24568 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:30:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:30:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:30:06.340883   24568 out.go:285] * 
	* 
	W1216 02:30:06.344014   24568 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:30:06.345108   24568 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable ingress --alsologtostderr -v=1: exit status 11 (242.954948ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:30:06.405986   24633 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:30:06.406307   24633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:06.406318   24633 out.go:374] Setting ErrFile to fd 2...
	I1216 02:30:06.406325   24633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:06.406514   24633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:30:06.406796   24633 mustload.go:66] Loading cluster: addons-568105
	I1216 02:30:06.407161   24633 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:30:06.407184   24633 addons.go:622] checking whether the cluster is paused
	I1216 02:30:06.407291   24633 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:30:06.407308   24633 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:30:06.407686   24633 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:30:06.426537   24633 ssh_runner.go:195] Run: systemctl --version
	I1216 02:30:06.426601   24633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:30:06.445419   24633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:30:06.541494   24633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:30:06.541568   24633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:30:06.568633   24633 cri.go:89] found id: "4efff691fcef802737c6fd1fa0c742d52a1b12d293a75b61aebf6b333a341078"
	I1216 02:30:06.568654   24633 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:30:06.568660   24633 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:30:06.568664   24633 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:30:06.568669   24633 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:30:06.568674   24633 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:30:06.568678   24633 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:30:06.568682   24633 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:30:06.568686   24633 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:30:06.568695   24633 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:30:06.568699   24633 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:30:06.568704   24633 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:30:06.568708   24633 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:30:06.568713   24633 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:30:06.568718   24633 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:30:06.568734   24633 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:30:06.568743   24633 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:30:06.568749   24633 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:30:06.568753   24633 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:30:06.568757   24633 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:30:06.568760   24633 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:30:06.568764   24633 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:30:06.568768   24633 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:30:06.568773   24633 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:30:06.568777   24633 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:30:06.568783   24633 cri.go:89] found id: ""
	I1216 02:30:06.568858   24633 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:30:06.582782   24633 out.go:203] 
	W1216 02:30:06.584113   24633 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:30:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:30:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:30:06.584134   24633 out.go:285] * 
	* 
	W1216 02:30:06.587125   24633 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:30:06.588362   24633 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.07s)
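Note: every "addons disable" call in this run exits 11 with MK_ADDON_DISABLE_PAUSED. The stderr captures above show the same sequence each time: minikube lists kube-system containers with crictl (which succeeds), then runs `sudo runc list -f json` on the node, which fails with `open /run/runc: no such file or directory`. The sketch below is a hypothetical, standalone Go reproduction of just those two node-side commands, run through `minikube ssh` (assuming `minikube ssh -- <cmd>` passes the command through, which minikube supports). The profile name addons-568105 is taken from this run, and the bare binary name "minikube" stands in for out/minikube-linux-amd64; this is not the minikube implementation itself, only a way to confirm the node-side failure independently of the test harness.

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode runs a shell command inside the minikube node for the given
// profile via `minikube ssh -- <cmd>`.
func runOnNode(profile, cmd string) (string, error) {
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "addons-568105" // profile name taken from this run

	// Step 1: the container listing that succeeds in the stderr above.
	out, err := runOnNode(profile, "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system")
	fmt.Printf("crictl: err=%v, %d bytes of container IDs\n", err, len(out))

	// Step 2: the runc listing that fails in the stderr above with
	// "open /run/runc: no such file or directory".
	out, err = runOnNode(profile, "sudo runc list -f json")
	fmt.Printf("runc list: err=%v\n%s\n", err, out)
}

If the second command fails the same way outside the harness, the exit-11 addon-disable failures in this report reproduce without running go test.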

                                                
                                    
TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-qf8c2" [ccbc8ed4-34eb-430a-9bb6-68f03a3a4065] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003185976s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (241.83782ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:27:51.809553   21373 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:51.809828   21373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:51.809838   21373 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:51.809842   21373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:51.810083   21373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:51.810426   21373 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:51.810768   21373 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:51.810789   21373 addons.go:622] checking whether the cluster is paused
	I1216 02:27:51.810901   21373 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:51.810919   21373 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:51.811316   21373 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:51.828611   21373 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:51.828669   21373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:51.848770   21373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:51.947672   21373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:51.947747   21373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:51.976000   21373 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:51.976027   21373 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:51.976034   21373 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:51.976039   21373 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:51.976043   21373 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:51.976047   21373 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:51.976050   21373 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:51.976053   21373 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:51.976055   21373 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:51.976069   21373 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:51.976074   21373 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:51.976078   21373 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:51.976083   21373 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:51.976094   21373 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:51.976099   21373 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:51.976110   21373 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:51.976114   21373 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:51.976120   21373 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:51.976133   21373 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:51.976141   21373 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:51.976145   21373 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:51.976150   21373 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:51.976153   21373 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:51.976156   21373 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:51.976159   21373 cri.go:89] found id: ""
	I1216 02:27:51.976194   21373 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:51.990699   21373 out.go:203] 
	W1216 02:27:51.991897   21373 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:51.991914   21373 out.go:285] * 
	* 
	W1216 02:27:51.994762   21373 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:51.995983   21373 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.25s)
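
The failing step in every one of these addon-disable calls is the paused-cluster check: the crictl listing of kube-system containers (the "found id" lines above) succeeds, but the follow-up "sudo runc list -f json" exits 1 because /run/runc does not exist on this crio node, which is what raises MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch, assuming the addons-568105 profile from this run is still up, replaying the same two commands the log shows minikube issuing:

    # succeeds and prints the same container IDs listed above
    minikube -p addons-568105 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # fails with "open /run/runc: no such file or directory", tripping the paused check
    minikube -p addons-568105 ssh -- sudo runc list -f json
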

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.84124ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002153787s
addons_test.go:465: (dbg) Run:  kubectl --context addons-568105 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (254.258634ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 02:27:40.332290   19492 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:40.332448   19492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:40.332458   19492 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:40.332464   19492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:40.332652   19492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:40.332970   19492 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:40.333311   19492 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:40.333332   19492 addons.go:622] checking whether the cluster is paused
	I1216 02:27:40.333426   19492 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:40.333441   19492 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:40.333852   19492 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:40.352243   19492 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:40.352295   19492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:40.370433   19492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:40.466801   19492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:40.466908   19492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:40.494785   19492 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:40.494803   19492 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:40.494808   19492 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:40.494811   19492 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:40.494814   19492 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:40.494842   19492 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:40.494847   19492 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:40.494851   19492 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:40.494857   19492 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:40.494877   19492 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:40.494883   19492 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:40.494886   19492 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:40.494889   19492 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:40.494892   19492 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:40.494895   19492 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:40.494899   19492 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:40.494904   19492 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:40.494909   19492 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:40.494913   19492 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:40.494918   19492 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:40.494926   19492 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:40.494930   19492 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:40.494935   19492 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:40.494942   19492 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:40.494947   19492 cri.go:89] found id: ""
	I1216 02:27:40.494989   19492 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:40.508390   19492 out.go:203] 
	W1216 02:27:40.509654   19492 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:40.509682   19492 out.go:285] * 
	* 
	W1216 02:27:40.512619   19492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:40.513846   19492 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)
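
For context, the metrics-server checks themselves passed; only the trailing addon disable hit the same paused check described under InspektorGadget above. A sketch of re-running those health checks by hand, reusing the context and label from the log:

    kubectl --context addons-568105 get pods -n kube-system -l k8s-app=metrics-server
    kubectl --context addons-568105 top pods -n kube-system
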

TestAddons/parallel/CSI (42.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1216 02:27:46.701855    8586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 02:27:46.704480    8586 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 02:27:46.704503    8586 kapi.go:107] duration metric: took 2.65998ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 2.670688ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [1434bd2f-c484-47ee-b1ff-de12072b6c11] Pending
helpers_test.go:353: "task-pv-pod" [1434bd2f-c484-47ee-b1ff-de12072b6c11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [1434bd2f-c484-47ee-b1ff-de12072b6c11] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00392001s
addons_test.go:574: (dbg) Run:  kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-568105 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-568105 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-568105 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-568105 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [ebbc284d-7fa0-4f34-9861-f76ff27c9a6a] Pending
helpers_test.go:353: "task-pv-pod-restore" [ebbc284d-7fa0-4f34-9861-f76ff27c9a6a] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003570094s
addons_test.go:616: (dbg) Run:  kubectl --context addons-568105 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-568105 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-568105 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (242.882219ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 02:28:28.874035   22523 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:28:28.874355   22523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:28:28.874366   22523 out.go:374] Setting ErrFile to fd 2...
	I1216 02:28:28.874370   22523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:28:28.874563   22523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:28:28.874831   22523 mustload.go:66] Loading cluster: addons-568105
	I1216 02:28:28.875187   22523 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:28:28.875214   22523 addons.go:622] checking whether the cluster is paused
	I1216 02:28:28.875333   22523 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:28:28.875347   22523 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:28:28.875808   22523 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:28:28.895391   22523 ssh_runner.go:195] Run: systemctl --version
	I1216 02:28:28.895443   22523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:28:28.912917   22523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:28:29.009385   22523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:28:29.009482   22523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:28:29.038710   22523 cri.go:89] found id: "4efff691fcef802737c6fd1fa0c742d52a1b12d293a75b61aebf6b333a341078"
	I1216 02:28:29.038728   22523 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:28:29.038731   22523 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:28:29.038735   22523 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:28:29.038737   22523 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:28:29.038741   22523 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:28:29.038743   22523 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:28:29.038746   22523 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:28:29.038749   22523 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:28:29.038759   22523 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:28:29.038761   22523 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:28:29.038764   22523 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:28:29.038766   22523 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:28:29.038769   22523 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:28:29.038772   22523 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:28:29.038778   22523 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:28:29.038781   22523 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:28:29.038786   22523 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:28:29.038789   22523 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:28:29.038791   22523 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:28:29.038794   22523 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:28:29.038797   22523 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:28:29.038799   22523 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:28:29.038802   22523 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:28:29.038804   22523 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:28:29.038807   22523 cri.go:89] found id: ""
	I1216 02:28:29.038866   22523 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:28:29.052972   22523 out.go:203] 
	W1216 02:28:29.054294   22523 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:28:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:28:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:28:29.054314   22523 out.go:285] * 
	* 
	W1216 02:28:29.057379   22523 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:28:29.058635   22523 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (237.681422ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 02:28:29.116603   22585 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:28:29.116747   22585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:28:29.116758   22585 out.go:374] Setting ErrFile to fd 2...
	I1216 02:28:29.116762   22585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:28:29.116985   22585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:28:29.117269   22585 mustload.go:66] Loading cluster: addons-568105
	I1216 02:28:29.117613   22585 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:28:29.117632   22585 addons.go:622] checking whether the cluster is paused
	I1216 02:28:29.117720   22585 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:28:29.117736   22585 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:28:29.118594   22585 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:28:29.136214   22585 ssh_runner.go:195] Run: systemctl --version
	I1216 02:28:29.136267   22585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:28:29.153969   22585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:28:29.250265   22585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:28:29.250345   22585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:28:29.278353   22585 cri.go:89] found id: "4efff691fcef802737c6fd1fa0c742d52a1b12d293a75b61aebf6b333a341078"
	I1216 02:28:29.278377   22585 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:28:29.278382   22585 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:28:29.278386   22585 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:28:29.278388   22585 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:28:29.278392   22585 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:28:29.278397   22585 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:28:29.278401   22585 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:28:29.278407   22585 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:28:29.278415   22585 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:28:29.278426   22585 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:28:29.278431   22585 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:28:29.278438   22585 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:28:29.278441   22585 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:28:29.278444   22585 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:28:29.278452   22585 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:28:29.278457   22585 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:28:29.278462   22585 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:28:29.278464   22585 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:28:29.278467   22585 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:28:29.278472   22585 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:28:29.278475   22585 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:28:29.278477   22585 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:28:29.278480   22585 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:28:29.278482   22585 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:28:29.278485   22585 cri.go:89] found id: ""
	I1216 02:28:29.278542   22585 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:28:29.292144   22585 out.go:203] 
	W1216 02:28:29.293127   22585 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:28:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:28:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:28:29.293145   22585 out.go:285] * 
	* 
	W1216 02:28:29.296071   22585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:28:29.297010   22585 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (42.60s)
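
The CSI workflow itself completed end to end (the PVC bound, task-pv-pod ran, the snapshot and restore steps ran, and the restore pod came up); only the two addon-disable calls at the end failed on the paused check. A sketch of the same sequence replayed by hand, using the manifests named in the log:

    kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-568105 delete pod task-pv-pod
    kubectl --context addons-568105 delete pvc hpvc
    kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-568105 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
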

TestAddons/parallel/Headlamp (2.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-568105 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-568105 --alsologtostderr -v=1: exit status 11 (237.150661ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 02:27:35.252430   18523 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:35.252570   18523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:35.252579   18523 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:35.252584   18523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:35.252833   18523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:35.253118   18523 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:35.253484   18523 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:35.253505   18523 addons.go:622] checking whether the cluster is paused
	I1216 02:27:35.253598   18523 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:35.253614   18523 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:35.254049   18523 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:35.271610   18523 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:35.271669   18523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:35.288339   18523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:35.383491   18523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:35.383564   18523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:35.410351   18523 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:35.410369   18523 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:35.410374   18523 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:35.410377   18523 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:35.410393   18523 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:35.410397   18523 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:35.410400   18523 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:35.410403   18523 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:35.410405   18523 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:35.410411   18523 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:35.410414   18523 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:35.410416   18523 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:35.410425   18523 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:35.410431   18523 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:35.410434   18523 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:35.410446   18523 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:35.410454   18523 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:35.410457   18523 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:35.410459   18523 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:35.410462   18523 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:35.410468   18523 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:35.410471   18523 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:35.410473   18523 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:35.410476   18523 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:35.410478   18523 cri.go:89] found id: ""
	I1216 02:27:35.410517   18523 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:35.424472   18523 out.go:203] 
	W1216 02:27:35.425584   18523 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:35.425607   18523 out.go:285] * 
	* 
	W1216 02:27:35.428539   18523 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:35.429653   18523 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-568105 --alsologtostderr -v=1": exit status 11
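
The enable path fails on the same check as the disables: MK_ADDON_ENABLE_PAUSED is raised because the paused probe itself errored (the runc call above), not because the node reported a paused state. One quick manual cross-check that the host is still running is the status call the post-mortem helper uses below:

    out/minikube-linux-amd64 status --format={{.Host}} -p addons-568105 -n addons-568105
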
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-568105
helpers_test.go:244: (dbg) docker inspect addons-568105:

-- stdout --
	[
	    {
	        "Id": "128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50",
	        "Created": "2025-12-16T02:25:45.591874719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11001,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:25:45.637033839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/hostname",
	        "HostsPath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/hosts",
	        "LogPath": "/var/lib/docker/containers/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50/128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50-json.log",
	        "Name": "/addons-568105",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-568105:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-568105",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "128ae821ecc5b7b85fc1e8cd4da177f4de25ebc0633051cd81740af30648ad50",
	                "LowerDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089/merged",
	                "UpperDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089/diff",
	                "WorkDir": "/var/lib/docker/overlay2/657c78810fafbd0d45b3883862d96b306c01b79400f8065b8d6e290d67a8c089/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-568105",
	                "Source": "/var/lib/docker/volumes/addons-568105/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-568105",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-568105",
	                "name.minikube.sigs.k8s.io": "addons-568105",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "90190998bf440cbf3358c33fe0e1c32414ed2292d54af7e3d435caafa41d08a6",
	            "SandboxKey": "/var/run/docker/netns/90190998bf44",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-568105": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d4d6946230c80378c13379f681aa4fc160da58c2330823871ef9d83121c1c0ec",
	                    "EndpointID": "1467eb93e9ce1c114b4079e0f13eaf54e3d8e07071eb8337021f2d2271312101",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "32:ad:f9:4e:a6:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-568105",
	                        "128ae821ecc5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
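
The NetworkSettings.Ports block above is what minikube reads to find the node's SSH endpoint (22/tcp mapped to 127.0.0.1:32768 in this run). A shell-quoted version of the inspect template the log shows minikube running extracts the same value:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-568105
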
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-568105 -n addons-568105
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-568105 logs -n 25: (1.072622093s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-407168 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-407168   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-407168                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-407168   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-217377 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-217377   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-217377                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-217377   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-388456 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-388456   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-388456                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-388456   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-407168                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-407168   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-217377                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-217377   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-388456                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-388456   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ --download-only -p download-docker-622909 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-622909 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ -p download-docker-622909                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-622909 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ --download-only -p binary-mirror-346468 --alsologtostderr --binary-mirror http://127.0.0.1:41995 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-346468   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ -p binary-mirror-346468                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-346468   │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ addons  │ enable dashboard -p addons-568105                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-568105          │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-568105                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-568105          │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ start   │ -p addons-568105 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-568105          │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:27 UTC │
	│ addons  │ addons-568105 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-568105          │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-568105 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-568105          │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-568105 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-568105          │ jenkins │ v1.37.0 │ 16 Dec 25 02:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:21.930987   10347 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:21.931081   10347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:21.931088   10347 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:21.931094   10347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:21.931271   10347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:25:21.931746   10347 out.go:368] Setting JSON to false
	I1216 02:25:21.932615   10347 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":474,"bootTime":1765851448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:21.932668   10347 start.go:143] virtualization: kvm guest
	I1216 02:25:21.934743   10347 out.go:179] * [addons-568105] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:25:21.935885   10347 notify.go:221] Checking for updates...
	I1216 02:25:21.935934   10347 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:25:21.937167   10347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:21.938606   10347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:25:21.940006   10347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:25:21.941230   10347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:25:21.942303   10347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:25:21.943564   10347 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:21.966291   10347 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:25:21.966394   10347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:22.017592   10347 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 02:25:22.007805985 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:22.017686   10347 docker.go:319] overlay module found
	I1216 02:25:22.020011   10347 out.go:179] * Using the docker driver based on user configuration
	I1216 02:25:22.021067   10347 start.go:309] selected driver: docker
	I1216 02:25:22.021083   10347 start.go:927] validating driver "docker" against <nil>
	I1216 02:25:22.021094   10347 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:25:22.021575   10347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:22.074000   10347 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 02:25:22.065106704 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:22.074178   10347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:22.074414   10347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:25:22.075929   10347 out.go:179] * Using Docker driver with root privileges
	I1216 02:25:22.077057   10347 cni.go:84] Creating CNI manager for ""
	I1216 02:25:22.077119   10347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:25:22.077130   10347 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 02:25:22.077188   10347 start.go:353] cluster config:
	{Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1216 02:25:22.078380   10347 out.go:179] * Starting "addons-568105" primary control-plane node in "addons-568105" cluster
	I1216 02:25:22.079338   10347 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 02:25:22.080352   10347 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:25:22.081403   10347 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:22.081441   10347 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 02:25:22.081449   10347 cache.go:65] Caching tarball of preloaded images
	I1216 02:25:22.081503   10347 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:25:22.081533   10347 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 02:25:22.081541   10347 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 02:25:22.081898   10347 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/config.json ...
	I1216 02:25:22.081930   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/config.json: {Name:mk21419428632a34a499f735ccdc8529f44bed77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:22.097619   10347 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 02:25:22.097734   10347 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory
	I1216 02:25:22.097752   10347 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory, skipping pull
	I1216 02:25:22.097756   10347 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in cache, skipping pull
	I1216 02:25:22.097763   10347 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb as a tarball
	I1216 02:25:22.097770   10347 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb from local cache
	I1216 02:25:35.517407   10347 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb from cached tarball
	I1216 02:25:35.517448   10347 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:25:35.517488   10347 start.go:360] acquireMachinesLock for addons-568105: {Name:mkff1bc43d5ab769de8a955435d1e20ee0b29deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:25:35.517599   10347 start.go:364] duration metric: took 91.641µs to acquireMachinesLock for "addons-568105"
	I1216 02:25:35.517624   10347 start.go:93] Provisioning new machine with config: &{Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:25:35.517712   10347 start.go:125] createHost starting for "" (driver="docker")
	I1216 02:25:35.519526   10347 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 02:25:35.519843   10347 start.go:159] libmachine.API.Create for "addons-568105" (driver="docker")
	I1216 02:25:35.519883   10347 client.go:173] LocalClient.Create starting
	I1216 02:25:35.520024   10347 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 02:25:35.632190   10347 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 02:25:35.862336   10347 cli_runner.go:164] Run: docker network inspect addons-568105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 02:25:35.879945   10347 cli_runner.go:211] docker network inspect addons-568105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 02:25:35.880012   10347 network_create.go:284] running [docker network inspect addons-568105] to gather additional debugging logs...
	I1216 02:25:35.880030   10347 cli_runner.go:164] Run: docker network inspect addons-568105
	W1216 02:25:35.895844   10347 cli_runner.go:211] docker network inspect addons-568105 returned with exit code 1
	I1216 02:25:35.895877   10347 network_create.go:287] error running [docker network inspect addons-568105]: docker network inspect addons-568105: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-568105 not found
	I1216 02:25:35.895899   10347 network_create.go:289] output of [docker network inspect addons-568105]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-568105 not found
	
	** /stderr **
	I1216 02:25:35.896033   10347 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:25:35.912775   10347 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000aff220}
	I1216 02:25:35.912809   10347 network_create.go:124] attempt to create docker network addons-568105 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 02:25:35.912877   10347 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-568105 addons-568105
	I1216 02:25:35.958030   10347 network_create.go:108] docker network addons-568105 192.168.49.0/24 created
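As the cli_runner lines above show, the network is created by shelling out to the docker CLI rather than calling the API. A minimal Go sketch of that pattern, reusing the exact "docker network create" arguments from the log (hypothetical helper, error handling trimmed):

package main

import (
	"fmt"
	"os/exec"
)

// createMinikubeNetwork mirrors the `docker network create` invocation
// logged above: a bridge network with a fixed subnet/gateway, the driver
// options passed via -o, and the minikube ownership labels.
func createMinikubeNetwork(name, subnet, gateway string) error {
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=" + subnet, "--gateway=" + gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values taken from the log above.
	if err := createMinikubeNetwork("addons-568105", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}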
	I1216 02:25:35.958060   10347 kic.go:121] calculated static IP "192.168.49.2" for the "addons-568105" container
	I1216 02:25:35.958128   10347 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 02:25:35.973295   10347 cli_runner.go:164] Run: docker volume create addons-568105 --label name.minikube.sigs.k8s.io=addons-568105 --label created_by.minikube.sigs.k8s.io=true
	I1216 02:25:35.991522   10347 oci.go:103] Successfully created a docker volume addons-568105
	I1216 02:25:35.991602   10347 cli_runner.go:164] Run: docker run --rm --name addons-568105-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568105 --entrypoint /usr/bin/test -v addons-568105:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 02:25:41.738962   10347 cli_runner.go:217] Completed: docker run --rm --name addons-568105-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568105 --entrypoint /usr/bin/test -v addons-568105:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib: (5.747315106s)
	I1216 02:25:41.738991   10347 oci.go:107] Successfully prepared a docker volume addons-568105
	I1216 02:25:41.739034   10347 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:41.739047   10347 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 02:25:41.739121   10347 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-568105:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 02:25:45.525444   10347 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-568105:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.786256623s)
	I1216 02:25:45.525480   10347 kic.go:203] duration metric: took 3.786428316s to extract preloaded images to volume ...
	W1216 02:25:45.525574   10347 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 02:25:45.525606   10347 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 02:25:45.525642   10347 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 02:25:45.576400   10347 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-568105 --name addons-568105 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-568105 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-568105 --network addons-568105 --ip 192.168.49.2 --volume addons-568105:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 02:25:45.875699   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Running}}
	I1216 02:25:45.895729   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:25:45.913183   10347 cli_runner.go:164] Run: docker exec addons-568105 stat /var/lib/dpkg/alternatives/iptables
	I1216 02:25:45.958954   10347 oci.go:144] the created container "addons-568105" has a running status.
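The two inspect calls just above are the readiness check: the container counts as up once {{.State.Running}} reports true. A minimal Go sketch of that poll loop (hypothetical helper; minikube's actual retry logic differs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format={{.State.Running}}`
// (the same check the log shows) until the container reports true.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("addons-568105", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}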
	I1216 02:25:45.958986   10347 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa...
	I1216 02:25:46.062281   10347 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 02:25:46.085898   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:25:46.104978   10347 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 02:25:46.105003   10347 kic_runner.go:114] Args: [docker exec --privileged addons-568105 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 02:25:46.153559   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:25:46.177713   10347 machine.go:94] provisionDockerMachine start ...
	I1216 02:25:46.177812   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:46.202110   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:46.202453   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:46.202474   10347 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:25:46.203799   10347 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44284->127.0.0.1:32768: read: connection reset by peer
	I1216 02:25:49.339555   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-568105
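Every SSH step above first resolves which 127.0.0.1 port Docker published for the container's 22/tcp (32768 in this run). A small Go sketch of that lookup using the same inspect template the log shows (assumed helper, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the 127.0.0.1 port that Docker mapped to the
// container's 22/tcp, using the same --format template seen in the log.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-568105")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh docker@127.0.0.1 -p", port) // this run mapped 22/tcp to 32768
}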
	
	I1216 02:25:49.339582   10347 ubuntu.go:182] provisioning hostname "addons-568105"
	I1216 02:25:49.339636   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.358553   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:49.358762   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:49.358775   10347 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-568105 && echo "addons-568105" | sudo tee /etc/hostname
	I1216 02:25:49.500259   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-568105
	
	I1216 02:25:49.500340   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.518189   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:49.518395   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:49.518412   10347 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-568105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-568105/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-568105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:25:49.652285   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:25:49.652317   10347 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 02:25:49.652347   10347 ubuntu.go:190] setting up certificates
	I1216 02:25:49.652365   10347 provision.go:84] configureAuth start
	I1216 02:25:49.652411   10347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568105
	I1216 02:25:49.669378   10347 provision.go:143] copyHostCerts
	I1216 02:25:49.669441   10347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 02:25:49.669541   10347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 02:25:49.669603   10347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 02:25:49.669651   10347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.addons-568105 san=[127.0.0.1 192.168.49.2 addons-568105 localhost minikube]
	I1216 02:25:49.729533   10347 provision.go:177] copyRemoteCerts
	I1216 02:25:49.729585   10347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:25:49.729632   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.746468   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:49.842779   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 02:25:49.861132   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 02:25:49.877314   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 02:25:49.893243   10347 provision.go:87] duration metric: took 240.860769ms to configureAuth
	I1216 02:25:49.893267   10347 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:25:49.893411   10347 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:25:49.893502   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:49.910832   10347 main.go:143] libmachine: Using SSH client type: native
	I1216 02:25:49.911052   10347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 02:25:49.911072   10347 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 02:25:50.172231   10347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 02:25:50.172257   10347 machine.go:97] duration metric: took 3.994517599s to provisionDockerMachine
	I1216 02:25:50.172269   10347 client.go:176] duration metric: took 14.652376853s to LocalClient.Create
	I1216 02:25:50.172289   10347 start.go:167] duration metric: took 14.652449708s to libmachine.API.Create "addons-568105"
	I1216 02:25:50.172299   10347 start.go:293] postStartSetup for "addons-568105" (driver="docker")
	I1216 02:25:50.172311   10347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:25:50.172370   10347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:25:50.172415   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.189371   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.287514   10347 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:25:50.290710   10347 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:25:50.290751   10347 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:25:50.290765   10347 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 02:25:50.290836   10347 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 02:25:50.290869   10347 start.go:296] duration metric: took 118.56362ms for postStartSetup
	I1216 02:25:50.291126   10347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568105
	I1216 02:25:50.309296   10347 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/config.json ...
	I1216 02:25:50.309545   10347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:25:50.309584   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.326207   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.418809   10347 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:25:50.423232   10347 start.go:128] duration metric: took 14.905503973s to createHost
	I1216 02:25:50.423256   10347 start.go:83] releasing machines lock for "addons-568105", held for 14.905644563s
	I1216 02:25:50.423331   10347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-568105
	I1216 02:25:50.440177   10347 ssh_runner.go:195] Run: cat /version.json
	I1216 02:25:50.440231   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.440286   10347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:25:50.440350   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:25:50.457071   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.458632   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:25:50.604878   10347 ssh_runner.go:195] Run: systemctl --version
	I1216 02:25:50.611079   10347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 02:25:50.644095   10347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:25:50.648433   10347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:25:50.648489   10347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:25:50.672260   10347 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 02:25:50.672278   10347 start.go:496] detecting cgroup driver to use...
	I1216 02:25:50.672304   10347 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 02:25:50.672343   10347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 02:25:50.687302   10347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 02:25:50.698743   10347 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:25:50.698796   10347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:25:50.714050   10347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:25:50.730017   10347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:25:50.803613   10347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:25:50.889875   10347 docker.go:234] disabling docker service ...
	I1216 02:25:50.889938   10347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:25:50.907223   10347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:25:50.918886   10347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:25:50.998352   10347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:25:51.076588   10347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:25:51.088293   10347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:25:51.101953   10347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 02:25:51.102008   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.111472   10347 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 02:25:51.111519   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.119625   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.127363   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.135386   10347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:25:51.142890   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.150629   10347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:25:51.163097   10347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
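Net effect of that series of sed edits: /etc/crio/crio.conf.d/02-crio.conf ends up carrying at least the following settings, shown here as a sketch without the surrounding TOML section headers (those depend on the base image's file layout):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]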
	I1216 02:25:51.171090   10347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:25:51.177890   10347 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 02:25:51.177939   10347 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 02:25:51.189329   10347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:25:51.196539   10347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:25:51.273292   10347 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 02:25:51.407013   10347 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 02:25:51.407092   10347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 02:25:51.410879   10347 start.go:564] Will wait 60s for crictl version
	I1216 02:25:51.410946   10347 ssh_runner.go:195] Run: which crictl
	I1216 02:25:51.414388   10347 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:25:51.438696   10347 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 02:25:51.438811   10347 ssh_runner.go:195] Run: crio --version
	I1216 02:25:51.465175   10347 ssh_runner.go:195] Run: crio --version
	I1216 02:25:51.493237   10347 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 02:25:51.494534   10347 cli_runner.go:164] Run: docker network inspect addons-568105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:25:51.512979   10347 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:25:51.516951   10347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:25:51.526490   10347 kubeadm.go:884] updating cluster {Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:25:51.526613   10347 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:51.526657   10347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:25:51.554313   10347 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:25:51.554335   10347 crio.go:433] Images already preloaded, skipping extraction
	I1216 02:25:51.554389   10347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:25:51.577952   10347 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:25:51.577978   10347 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:25:51.577986   10347 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 02:25:51.578074   10347 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-568105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:25:51.578133   10347 ssh_runner.go:195] Run: crio config
	I1216 02:25:51.620692   10347 cni.go:84] Creating CNI manager for ""
	I1216 02:25:51.620722   10347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:25:51.620744   10347 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:25:51.620766   10347 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-568105 NodeName:addons-568105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:25:51.620917   10347 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-568105"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 02:25:51.620984   10347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
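The generated kubeadm config above pins the kubelet to the CRI-O socket and the systemd cgroup driver. A hedged Go sketch that round-trips just the KubeletConfiguration document and checks those two fields; it assumes the gopkg.in/yaml.v3 package, and the struct below covers only the fields of interest, not the full KubeletConfiguration schema:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletCfg captures only the two fields checked here; the real
// KubeletConfiguration type has many more fields.
type kubeletCfg struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

// Excerpt of the KubeletConfiguration document shown in the log above.
const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	var cfg kubeletCfg
	if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
		panic(err)
	}
	fmt.Println("cgroupDriver:", cfg.CgroupDriver)
	fmt.Println("CRI endpoint:", cfg.ContainerRuntimeEndpoint)
}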
	I1216 02:25:51.629112   10347 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:25:51.629178   10347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:25:51.636711   10347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 02:25:51.648482   10347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 02:25:51.663072   10347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 02:25:51.674801   10347 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:25:51.678229   10347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:25:51.687431   10347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:25:51.765677   10347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:25:51.789150   10347 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105 for IP: 192.168.49.2
	I1216 02:25:51.789173   10347 certs.go:195] generating shared ca certs ...
	I1216 02:25:51.789191   10347 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.789344   10347 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 02:25:51.903348   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt ...
	I1216 02:25:51.903377   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt: {Name:mka3bd05f062522bac970d87e69a6f4541c67945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.903577   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key ...
	I1216 02:25:51.903592   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key: {Name:mk6c16b6cf95261037ec88d060ec3f6c89fbea36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.903699   10347 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 02:25:51.962269   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt ...
	I1216 02:25:51.962295   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt: {Name:mk881062a9d4092bfcf46f29ecf2d3c3cbf1d6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.962459   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key ...
	I1216 02:25:51.962469   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key: {Name:mk85c89aeac918c8ed9e2f62e347511843d6bb33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.962542   10347 certs.go:257] generating profile certs ...
	I1216 02:25:51.962599   10347 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.key
	I1216 02:25:51.962613   10347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt with IP's: []
	I1216 02:25:51.990650   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt ...
	I1216 02:25:51.990675   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: {Name:mk89d973e054d2af0d0d12fa72da63d7b7cc951c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.990854   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.key ...
	I1216 02:25:51.990865   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.key: {Name:mk23ce2e5798b14f25ddc24f8ad21860e4d2d95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:51.990938   10347 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c
	I1216 02:25:51.990958   10347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 02:25:52.204013   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c ...
	I1216 02:25:52.204041   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c: {Name:mk6eac4c01d5db7800a0de5ec0cd6c917cf0a3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.204195   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c ...
	I1216 02:25:52.204208   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c: {Name:mkcf6edc0553dad82dfe4abad1fca12f2e8af338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.204294   10347 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt.7dde552c -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt
	I1216 02:25:52.204389   10347 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key.7dde552c -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key
	I1216 02:25:52.204446   10347 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key
	I1216 02:25:52.204465   10347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt with IP's: []
	I1216 02:25:52.285957   10347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt ...
	I1216 02:25:52.285984   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt: {Name:mk39bcf943bc32d6118697cd1443c5bf53423ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.286142   10347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key ...
	I1216 02:25:52.286152   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key: {Name:mk333ddc04925678bf1d04fd5cf85be03a1194f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:25:52.286333   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 02:25:52.286368   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 02:25:52.286393   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:25:52.286430   10347 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
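The certs.go steps above first create the shared minikubeCA and proxyClientCA key pairs, then sign the per-profile client, apiserver, and aggregator certificates with them before copying everything into /var/lib/minikube/certs. A minimal sketch of creating a self-signed CA with Go's crypto/x509; the 2048-bit key size and 10-year validity are illustrative assumptions of this sketch, not the values minikube uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Illustrative parameters: 2048-bit RSA key, 10-year validity.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}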
	I1216 02:25:52.287001   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:25:52.304761   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 02:25:52.320927   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:25:52.336881   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:25:52.353099   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 02:25:52.369121   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:25:52.385237   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:25:52.401076   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:25:52.416676   10347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:25:52.434375   10347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:25:52.445831   10347 ssh_runner.go:195] Run: openssl version
	I1216 02:25:52.451688   10347 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.458523   10347 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:25:52.467351   10347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.470570   10347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.470610   10347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:25:52.504227   10347 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:25:52.512063   10347 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 02:25:52.518936   10347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:25:52.522277   10347 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 02:25:52.522322   10347 kubeadm.go:401] StartCluster: {Name:addons-568105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-568105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:25:52.522401   10347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:25:52.522448   10347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:25:52.547576   10347 cri.go:89] found id: ""
	I1216 02:25:52.547629   10347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:25:52.555167   10347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:25:52.562515   10347 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 02:25:52.562558   10347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:25:52.569520   10347 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 02:25:52.569537   10347 kubeadm.go:158] found existing configuration files:
	
	I1216 02:25:52.569577   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 02:25:52.576487   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 02:25:52.576531   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 02:25:52.583397   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 02:25:52.590986   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 02:25:52.591038   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:25:52.598043   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 02:25:52.606198   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 02:25:52.606248   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:25:52.613321   10347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 02:25:52.620578   10347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 02:25:52.620622   10347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:25:52.627533   10347 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 02:25:52.660964   10347 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 02:25:52.661043   10347 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 02:25:52.679212   10347 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 02:25:52.679321   10347 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 02:25:52.679365   10347 kubeadm.go:319] OS: Linux
	I1216 02:25:52.679407   10347 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 02:25:52.679449   10347 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 02:25:52.679495   10347 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 02:25:52.679542   10347 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 02:25:52.679583   10347 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 02:25:52.679651   10347 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 02:25:52.679722   10347 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 02:25:52.679789   10347 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 02:25:52.732786   10347 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 02:25:52.732938   10347 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 02:25:52.733072   10347 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 02:25:52.740356   10347 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 02:25:52.743079   10347 out.go:252]   - Generating certificates and keys ...
	I1216 02:25:52.743188   10347 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 02:25:52.743265   10347 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 02:25:52.864090   10347 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 02:25:53.096154   10347 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 02:25:53.160673   10347 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 02:25:53.786925   10347 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 02:25:54.254541   10347 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 02:25:54.254684   10347 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-568105 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 02:25:54.740973   10347 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 02:25:54.741098   10347 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-568105 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 02:25:55.030131   10347 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 02:25:55.293192   10347 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 02:25:55.438431   10347 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 02:25:55.438493   10347 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 02:25:55.511628   10347 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 02:25:55.783281   10347 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 02:25:55.886088   10347 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 02:25:56.053726   10347 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 02:25:56.102775   10347 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 02:25:56.103292   10347 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 02:25:56.107881   10347 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 02:25:56.109374   10347 out.go:252]   - Booting up control plane ...
	I1216 02:25:56.109514   10347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 02:25:56.109625   10347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 02:25:56.110172   10347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 02:25:56.122948   10347 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 02:25:56.123087   10347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 02:25:56.129134   10347 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 02:25:56.129413   10347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 02:25:56.129479   10347 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 02:25:56.223556   10347 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 02:25:56.223710   10347 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 02:25:56.725256   10347 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.784823ms
	I1216 02:25:56.729126   10347 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 02:25:56.729274   10347 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 02:25:56.729367   10347 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 02:25:56.729472   10347 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 02:25:58.219737   10347 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.490428677s
	I1216 02:25:58.733693   10347 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.00446933s
	I1216 02:26:00.230982   10347 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501745765s
	I1216 02:26:00.246959   10347 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 02:26:00.256668   10347 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 02:26:00.265809   10347 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 02:26:00.266118   10347 kubeadm.go:319] [mark-control-plane] Marking the node addons-568105 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 02:26:00.274990   10347 kubeadm.go:319] [bootstrap-token] Using token: pcp3la.vbq2i6sf71q8sp7z
	I1216 02:26:00.276388   10347 out.go:252]   - Configuring RBAC rules ...
	I1216 02:26:00.276526   10347 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 02:26:00.279346   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 02:26:00.284235   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 02:26:00.286296   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 02:26:00.289344   10347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 02:26:00.291382   10347 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 02:26:00.637592   10347 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 02:26:01.051691   10347 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 02:26:01.636744   10347 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 02:26:01.637743   10347 kubeadm.go:319] 
	I1216 02:26:01.637908   10347 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 02:26:01.637928   10347 kubeadm.go:319] 
	I1216 02:26:01.638052   10347 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 02:26:01.638069   10347 kubeadm.go:319] 
	I1216 02:26:01.638104   10347 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 02:26:01.638223   10347 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 02:26:01.638314   10347 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 02:26:01.638324   10347 kubeadm.go:319] 
	I1216 02:26:01.638370   10347 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 02:26:01.638376   10347 kubeadm.go:319] 
	I1216 02:26:01.638415   10347 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 02:26:01.638421   10347 kubeadm.go:319] 
	I1216 02:26:01.638463   10347 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 02:26:01.638528   10347 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 02:26:01.638588   10347 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 02:26:01.638598   10347 kubeadm.go:319] 
	I1216 02:26:01.638692   10347 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 02:26:01.638804   10347 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 02:26:01.638836   10347 kubeadm.go:319] 
	I1216 02:26:01.638926   10347 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token pcp3la.vbq2i6sf71q8sp7z \
	I1216 02:26:01.639067   10347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 02:26:01.639099   10347 kubeadm.go:319] 	--control-plane 
	I1216 02:26:01.639113   10347 kubeadm.go:319] 
	I1216 02:26:01.639246   10347 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 02:26:01.639260   10347 kubeadm.go:319] 
	I1216 02:26:01.639390   10347 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token pcp3la.vbq2i6sf71q8sp7z \
	I1216 02:26:01.639525   10347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 02:26:01.641369   10347 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 02:26:01.641599   10347 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
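The join command printed above pins trust in the cluster CA via --discovery-token-ca-cert-hash, which is the SHA-256 digest of the CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A short Go sketch that recomputes that digest from a ca.crt PEM file; the input path is an illustrative assumption:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; on the node the kubeadm CA is the cert copied to
	// /var/lib/minikube/certs/ca.crt earlier in this log.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, the value kubeadm prints.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}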
	I1216 02:26:01.641628   10347 cni.go:84] Creating CNI manager for ""
	I1216 02:26:01.641637   10347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:26:01.643234   10347 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 02:26:01.644450   10347 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 02:26:01.648499   10347 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 02:26:01.648517   10347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 02:26:01.660768   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 02:26:01.856617   10347 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 02:26:01.856696   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:01.856724   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-568105 minikube.k8s.io/updated_at=2025_12_16T02_26_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=addons-568105 minikube.k8s.io/primary=true
	I1216 02:26:01.936760   10347 ops.go:34] apiserver oom_adj: -16
	I1216 02:26:01.936766   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:02.436902   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:02.936888   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:03.437599   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:03.936944   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:04.437884   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:04.937002   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:05.437616   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:05.936923   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:06.436891   10347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:06.498764   10347 kubeadm.go:1114] duration metric: took 4.642125373s to wait for elevateKubeSystemPrivileges
	I1216 02:26:06.498799   10347 kubeadm.go:403] duration metric: took 13.976480172s to StartCluster
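The repeated "kubectl get sa default" runs between 02:26:01 and 02:26:06 are a readiness poll: minikube keeps retrying until the default ServiceAccount exists before applying the cluster-admin binding, and that wait is what the elevateKubeSystemPrivileges duration metric measures. A hedged Go sketch of the same poll-until-success pattern; the interval, timeout, and use of the kubectl binary rather than client-go are assumptions of this sketch:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the command exits
// cleanly or the timeout elapses.
func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "-n", "default", "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Illustrative values: 500ms interval, 1 minute timeout.
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("default service account is ready")
}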
	I1216 02:26:06.498838   10347 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:06.498979   10347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:26:06.499527   10347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:06.499734   10347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 02:26:06.499779   10347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:26:06.499841   10347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 02:26:06.499980   10347 addons.go:70] Setting default-storageclass=true in profile "addons-568105"
	I1216 02:26:06.499991   10347 addons.go:70] Setting yakd=true in profile "addons-568105"
	I1216 02:26:06.499999   10347 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:26:06.500012   10347 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-568105"
	I1216 02:26:06.500026   10347 addons.go:70] Setting registry=true in profile "addons-568105"
	I1216 02:26:06.500037   10347 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-568105"
	I1216 02:26:06.500006   10347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-568105"
	I1216 02:26:06.500041   10347 addons.go:70] Setting metrics-server=true in profile "addons-568105"
	I1216 02:26:06.500049   10347 addons.go:239] Setting addon registry=true in "addons-568105"
	I1216 02:26:06.500057   10347 addons.go:239] Setting addon metrics-server=true in "addons-568105"
	I1216 02:26:06.500041   10347 addons.go:70] Setting ingress-dns=true in profile "addons-568105"
	I1216 02:26:06.500077   10347 addons.go:70] Setting cloud-spanner=true in profile "addons-568105"
	I1216 02:26:06.500080   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500090   10347 addons.go:239] Setting addon ingress-dns=true in "addons-568105"
	I1216 02:26:06.500098   10347 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-568105"
	I1216 02:26:06.500082   10347 addons.go:70] Setting inspektor-gadget=true in profile "addons-568105"
	I1216 02:26:06.500129   10347 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-568105"
	I1216 02:26:06.500143   10347 addons.go:239] Setting addon inspektor-gadget=true in "addons-568105"
	I1216 02:26:06.500152   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500162   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500189   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500200   10347 addons.go:70] Setting ingress=true in profile "addons-568105"
	I1216 02:26:06.500213   10347 addons.go:239] Setting addon ingress=true in "addons-568105"
	I1216 02:26:06.500237   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500406   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500568   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500639   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500658   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500675   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500680   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500858   10347 addons.go:70] Setting gcp-auth=true in profile "addons-568105"
	I1216 02:26:06.500901   10347 mustload.go:66] Loading cluster: addons-568105
	I1216 02:26:06.501081   10347 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:26:06.501393   10347 addons.go:70] Setting registry-creds=true in profile "addons-568105"
	I1216 02:26:06.501411   10347 addons.go:239] Setting addon registry-creds=true in "addons-568105"
	I1216 02:26:06.501435   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.501905   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500017   10347 addons.go:239] Setting addon yakd=true in "addons-568105"
	I1216 02:26:06.502398   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.502525   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.500071   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.503969   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.505147   10347 addons.go:70] Setting volumesnapshots=true in profile "addons-568105"
	I1216 02:26:06.505181   10347 addons.go:239] Setting addon volumesnapshots=true in "addons-568105"
	I1216 02:26:06.505227   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.500021   10347 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-568105"
	I1216 02:26:06.505479   10347 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-568105"
	I1216 02:26:06.505510   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.505755   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.506009   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.506246   10347 out.go:179] * Verifying Kubernetes components...
	I1216 02:26:06.506278   10347 addons.go:70] Setting volcano=true in profile "addons-568105"
	I1216 02:26:06.506281   10347 addons.go:70] Setting storage-provisioner=true in profile "addons-568105"
	I1216 02:26:06.506295   10347 addons.go:239] Setting addon volcano=true in "addons-568105"
	I1216 02:26:06.506300   10347 addons.go:239] Setting addon storage-provisioner=true in "addons-568105"
	I1216 02:26:06.500092   10347 addons.go:239] Setting addon cloud-spanner=true in "addons-568105"
	I1216 02:26:06.506321   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.506325   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.506331   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.506268   10347 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-568105"
	I1216 02:26:06.506503   10347 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-568105"
	I1216 02:26:06.500092   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.509938   10347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:26:06.513446   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.513868   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.514463   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.515375   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.516349   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.516655   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.558966   10347 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 02:26:06.560680   10347 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 02:26:06.560705   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 02:26:06.560768   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.569861   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:06.570268   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 02:26:06.573272   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 02:26:06.574622   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 02:26:06.574809   10347 addons.go:239] Setting addon default-storageclass=true in "addons-568105"
	I1216 02:26:06.574897   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.575478   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.576138   10347 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 02:26:06.576159   10347 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 02:26:06.576170   10347 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 02:26:06.577616   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 02:26:06.577654   10347 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 02:26:06.577664   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 02:26:06.577715   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.578495   10347 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 02:26:06.578519   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 02:26:06.578588   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.579480   10347 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 02:26:06.580352   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 02:26:06.580475   10347 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 02:26:06.580486   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 02:26:06.580541   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.582040   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 02:26:06.582841   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 02:26:06.585983   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:06.588000   10347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:26:06.588220   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 02:26:06.589254   10347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:26:06.589275   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:26:06.589335   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.589764   10347 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 02:26:06.589783   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 02:26:06.589845   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.591332   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 02:26:06.591975   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.592360   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 02:26:06.592380   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 02:26:06.592426   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.592591   10347 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 02:26:06.593710   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 02:26:06.593728   10347 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 02:26:06.593785   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.609678   10347 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 02:26:06.611870   10347 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 02:26:06.612552   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 02:26:06.612569   10347 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 02:26:06.612628   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.613304   10347 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 02:26:06.613319   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 02:26:06.613379   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.615302   10347 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 02:26:06.616633   10347 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 02:26:06.616934   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 02:26:06.617140   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	W1216 02:26:06.616707   10347 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 02:26:06.617655   10347 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-568105"
	I1216 02:26:06.617710   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:06.618268   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:06.625282   10347 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 02:26:06.626480   10347 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 02:26:06.626500   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 02:26:06.626557   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.633238   10347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:26:06.633262   10347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:26:06.633331   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.645207   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.646130   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.648253   10347 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 02:26:06.649352   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 02:26:06.649384   10347 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 02:26:06.649448   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.654890   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.661327   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.669848   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.679770   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.687058   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.688047   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.688373   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.688454   10347 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 02:26:06.690570   10347 out.go:179]   - Using image docker.io/busybox:stable
	I1216 02:26:06.690663   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.692106   10347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 02:26:06.692125   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 02:26:06.692182   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:06.698266   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.704932   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.706432   10347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 02:26:06.724104   10347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:26:06.724481   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.730995   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.739156   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:06.855807   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 02:26:06.860299   10347 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 02:26:06.860320   10347 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 02:26:06.865082   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 02:26:06.866396   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 02:26:06.876716   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 02:26:06.878046   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 02:26:06.878071   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 02:26:06.879156   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:26:06.881285   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:26:06.884349   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 02:26:06.917295   10347 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 02:26:06.917327   10347 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 02:26:06.931646   10347 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 02:26:06.931685   10347 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 02:26:06.936461   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 02:26:06.938418   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 02:26:06.938463   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 02:26:06.945899   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 02:26:06.945930   10347 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 02:26:06.947286   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 02:26:06.961505   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 02:26:06.961536   10347 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 02:26:06.975027   10347 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 02:26:06.975058   10347 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 02:26:06.980704   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 02:26:06.990433   10347 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 02:26:06.990462   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 02:26:06.999623   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 02:26:06.999657   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 02:26:07.016992   10347 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 02:26:07.017025   10347 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 02:26:07.039533   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 02:26:07.039562   10347 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 02:26:07.052285   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 02:26:07.052333   10347 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 02:26:07.057333   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 02:26:07.065343   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 02:26:07.065395   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 02:26:07.071750   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 02:26:07.089903   10347 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:07.089932   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 02:26:07.123965   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 02:26:07.123992   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 02:26:07.127914   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 02:26:07.127938   10347 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 02:26:07.148202   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:07.194760   10347 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 02:26:07.194792   10347 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 02:26:07.210000   10347 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 02:26:07.210026   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 02:26:07.274043   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 02:26:07.283256   10347 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1216 02:26:07.287020   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 02:26:07.287053   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 02:26:07.287974   10347 node_ready.go:35] waiting up to 6m0s for node "addons-568105" to be "Ready" ...
	I1216 02:26:07.336406   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 02:26:07.336440   10347 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 02:26:07.388258   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 02:26:07.388287   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 02:26:07.431679   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 02:26:07.431706   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 02:26:07.458923   10347 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 02:26:07.458953   10347 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 02:26:07.491411   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 02:26:07.799148   10347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-568105" context rescaled to 1 replicas
	I1216 02:26:08.048936   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.193074656s)
	I1216 02:26:08.048975   10347 addons.go:495] Verifying addon ingress=true in "addons-568105"
	I1216 02:26:08.049002   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.183888431s)
	I1216 02:26:08.049090   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.182673618s)
	I1216 02:26:08.049144   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.172399638s)
	I1216 02:26:08.049218   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.170028525s)
	I1216 02:26:08.049270   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167959139s)
	I1216 02:26:08.049322   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.164948993s)
	I1216 02:26:08.049383   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.112894818s)
	I1216 02:26:08.049437   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.102111409s)
	I1216 02:26:08.049517   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.068780119s)
	I1216 02:26:08.049582   10347 addons.go:495] Verifying addon registry=true in "addons-568105"
	I1216 02:26:08.049650   10347 addons.go:495] Verifying addon metrics-server=true in "addons-568105"
	I1216 02:26:08.050436   10347 out.go:179] * Verifying ingress addon...
	I1216 02:26:08.051282   10347 out.go:179] * Verifying registry addon...
	I1216 02:26:08.052808   10347 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 02:26:08.057317   10347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 02:26:08.058156   10347 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1216 02:26:08.058836   10347 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1216 02:26:08.062272   10347 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 02:26:08.062294   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:08.488200   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.339942347s)
	W1216 02:26:08.488262   10347 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 02:26:08.488296   10347 retry.go:31] will retry after 345.601988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 02:26:08.488375   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.21430215s)
	I1216 02:26:08.488863   10347 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-568105"
	I1216 02:26:08.489772   10347 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 02:26:08.489777   10347 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-568105 service yakd-dashboard -n yakd-dashboard
	
	I1216 02:26:08.492365   10347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 02:26:08.495888   10347 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 02:26:08.495918   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:08.596419   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:08.596515   10347 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 02:26:08.596536   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:08.834760   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:08.996003   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:09.097348   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:09.097399   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1216 02:26:09.291178   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:09.496061   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:09.596512   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:09.596732   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:09.995475   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:10.096443   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:10.096622   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:10.495460   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:10.555989   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:10.559528   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:10.996208   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:11.096506   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:11.096680   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:11.260052   10347 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.425243565s)
	I1216 02:26:11.495752   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:11.595882   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:11.596075   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1216 02:26:11.790538   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:11.995677   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:12.096420   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:12.096629   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:12.495768   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:12.556227   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:12.559994   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:12.995484   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:13.096354   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:13.096410   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:13.495849   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:13.556410   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:13.560261   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:13.791387   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:13.996228   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:14.096511   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:14.096568   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:14.199797   10347 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 02:26:14.199880   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:14.217865   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:14.319486   10347 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 02:26:14.331369   10347 addons.go:239] Setting addon gcp-auth=true in "addons-568105"
	I1216 02:26:14.331429   10347 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:26:14.331767   10347 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:26:14.349501   10347 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 02:26:14.349552   10347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:26:14.367213   10347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:26:14.462505   10347 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:14.463876   10347 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 02:26:14.464879   10347 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 02:26:14.464893   10347 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 02:26:14.476946   10347 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 02:26:14.476970   10347 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 02:26:14.488935   10347 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 02:26:14.488954   10347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 02:26:14.495702   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:14.501493   10347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 02:26:14.556095   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:14.560434   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:14.790177   10347 addons.go:495] Verifying addon gcp-auth=true in "addons-568105"
	I1216 02:26:14.791491   10347 out.go:179] * Verifying gcp-auth addon...
	I1216 02:26:14.793468   10347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 02:26:14.795159   10347 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 02:26:14.795172   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:14.996184   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:15.055712   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:15.059413   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:15.296335   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:15.494650   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:15.556081   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:15.559707   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:15.796489   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:15.995290   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:16.055873   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:16.059897   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:16.291320   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:16.296245   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:16.495691   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:16.556172   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:16.560028   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:16.796071   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:16.995547   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:17.056027   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:17.059670   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:17.296074   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:17.495628   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:17.556222   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:17.560106   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:17.796559   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:17.995411   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:18.056032   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:18.059618   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:18.296087   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:18.495627   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:18.556033   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:18.559768   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:18.791086   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:18.795979   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:18.995383   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:19.055933   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:19.059853   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:19.296266   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:19.496031   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:19.555498   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:19.559060   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:19.796557   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:19.995155   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:20.055728   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:20.059416   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:20.296420   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:20.494894   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:20.556329   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:20.560163   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:20.796057   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:20.995047   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:21.055294   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:21.060197   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:21.290686   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:21.295656   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:21.494977   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:21.555554   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:21.559489   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:21.796634   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:21.995570   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:22.055961   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:22.059717   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:22.296215   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:22.495868   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:22.556244   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:22.559974   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:22.795522   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:22.995029   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:23.056435   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:23.060360   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:23.290994   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:23.295802   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:23.495287   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:23.555676   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:23.559546   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:23.796162   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:23.995986   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:24.055362   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:24.060140   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:24.296332   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:24.495668   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:24.555953   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:24.559710   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:24.796134   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:24.995629   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:25.055994   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:25.059673   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:25.291218   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:25.296260   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:25.495523   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:25.555611   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:25.559473   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:25.795687   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:25.995479   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:26.055921   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:26.059669   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:26.296321   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:26.495918   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:26.556402   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:26.560207   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:26.795670   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:26.995360   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:27.055938   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:27.059671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:27.291429   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:27.296441   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:27.495957   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:27.556235   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:27.560039   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:27.796374   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:27.995180   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:28.055788   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:28.059432   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:28.295887   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:28.495486   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:28.555927   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:28.559570   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:28.796178   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:28.995706   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:29.056242   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:29.060091   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:29.295645   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:29.494961   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:29.556510   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:29.559134   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:29.790661   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:29.796190   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:29.996222   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:30.055475   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:30.059204   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:30.296270   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:30.495606   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:30.555998   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:30.559707   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:30.796331   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:30.994777   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:31.056162   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:31.060098   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:31.296384   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:31.494704   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:31.556504   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:31.559292   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:31.796008   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:31.995955   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:32.056398   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:32.060280   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:32.290870   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:32.295808   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:32.495385   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:32.555831   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:32.559669   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:32.796089   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:32.995676   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:33.055991   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:33.059845   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:33.296165   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:33.495526   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:33.555925   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:33.559893   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:33.796403   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:33.995042   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:34.055437   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:34.060207   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:34.295804   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:34.495202   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:34.555847   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:34.559507   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:34.791024   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:34.795758   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:34.995141   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:35.055705   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:35.059465   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:35.295856   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:35.495590   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:35.556344   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:35.560143   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:35.796268   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:35.995762   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:36.056118   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:36.059861   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:36.296093   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:36.495562   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:36.556058   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:36.560051   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:36.796536   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:36.995212   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:37.055784   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:37.059762   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:37.291219   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:37.296279   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:37.495945   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:37.556357   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:37.560219   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:37.795762   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:37.995304   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:38.055651   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:38.059500   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:38.295765   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:38.495142   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:38.555516   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:38.559269   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:38.795669   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:38.995373   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:39.055977   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:39.059744   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:39.291359   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:39.296583   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:39.494902   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:39.555166   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:39.559886   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:39.796699   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:39.995443   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:40.055750   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:40.059568   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:40.296300   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:40.494863   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:40.556718   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:40.559352   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:40.795692   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:40.995271   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:41.055664   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:41.059403   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:41.295847   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:41.495086   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:41.555644   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:41.559448   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:41.791085   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:41.796307   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:41.995177   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:42.055738   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:42.059498   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:42.296047   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:42.495514   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:42.556160   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:42.559967   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:42.796640   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:42.995042   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:43.055324   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:43.060065   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:43.296507   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:43.494684   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:43.556089   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:43.559722   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:43.791320   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:43.796349   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:43.995798   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:44.056423   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:44.060330   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:44.295784   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:44.495346   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:44.555682   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:44.559397   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:44.796101   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:44.995309   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:45.055741   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:45.059563   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:45.296317   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:45.494786   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:45.556224   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:45.559968   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:45.796572   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:45.994929   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:46.056321   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:46.060163   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 02:26:46.290649   10347 node_ready.go:57] node "addons-568105" has "Ready":"False" status (will retry)
	I1216 02:26:46.295569   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:46.495101   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:46.555510   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:46.559254   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:46.796260   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:46.995094   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:47.056025   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:47.059749   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:47.295677   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:47.495267   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:47.555968   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:47.559911   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:47.796252   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:47.996314   10347 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 02:26:47.996338   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:48.058443   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:48.059366   10347 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 02:26:48.059381   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:48.293182   10347 node_ready.go:49] node "addons-568105" is "Ready"
	I1216 02:26:48.293216   10347 node_ready.go:38] duration metric: took 41.00521133s for node "addons-568105" to be "Ready" ...
	I1216 02:26:48.293239   10347 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:26:48.293300   10347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:26:48.297581   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:48.312717   10347 api_server.go:72] duration metric: took 41.812902671s to wait for apiserver process to appear ...
	I1216 02:26:48.312744   10347 api_server.go:88] waiting for apiserver healthz status ...
	I1216 02:26:48.312771   10347 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 02:26:48.318213   10347 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 02:26:48.319544   10347 api_server.go:141] control plane version: v1.34.2
	I1216 02:26:48.319577   10347 api_server.go:131] duration metric: took 6.825655ms to wait for apiserver health ...
	I1216 02:26:48.319588   10347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 02:26:48.401858   10347 system_pods.go:59] 20 kube-system pods found
	I1216 02:26:48.401903   10347 system_pods.go:61] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.401914   10347 system_pods.go:61] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.401940   10347 system_pods.go:61] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.401949   10347 system_pods.go:61] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.401958   10347 system_pods.go:61] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.401965   10347 system_pods.go:61] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.401971   10347 system_pods.go:61] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.401976   10347 system_pods.go:61] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.401981   10347 system_pods.go:61] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.401990   10347 system_pods.go:61] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.401996   10347 system_pods.go:61] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.402002   10347 system_pods.go:61] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.402010   10347 system_pods.go:61] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.402019   10347 system_pods.go:61] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.402027   10347 system_pods.go:61] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.402035   10347 system_pods.go:61] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.402042   10347 system_pods.go:61] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.402053   10347 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.402061   10347 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.402069   10347 system_pods.go:61] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.402078   10347 system_pods.go:74] duration metric: took 82.482524ms to wait for pod list to return data ...
	I1216 02:26:48.402090   10347 default_sa.go:34] waiting for default service account to be created ...
	I1216 02:26:48.404596   10347 default_sa.go:45] found service account: "default"
	I1216 02:26:48.404620   10347 default_sa.go:55] duration metric: took 2.5239ms for default service account to be created ...
	I1216 02:26:48.404631   10347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 02:26:48.407720   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:48.407747   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.407754   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.407763   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.407771   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.407780   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.407785   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.407792   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.407801   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.407807   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.407827   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.407833   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.407840   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.407853   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.407861   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.407870   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.407883   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.407891   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.407901   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.407908   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.407915   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.407931   10347 retry.go:31] will retry after 209.008811ms: missing components: kube-dns
	I1216 02:26:48.498552   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:48.556427   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:48.560069   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:48.622367   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:48.622404   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.622415   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.622427   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.622437   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.622479   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.622486   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.622493   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.622499   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.622569   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.622603   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.622763   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.622771   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.622779   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.622788   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.622839   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.622852   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.622861   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.622870   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.622879   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.622893   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.622911   10347 retry.go:31] will retry after 239.273402ms: missing components: kube-dns
	I1216 02:26:48.797779   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:48.867516   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:48.867552   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:48.867563   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:48.867572   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:48.867588   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:48.867599   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:48.867607   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:48.867613   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:48.867619   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:48.867625   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:48.867634   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:48.867645   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:48.867652   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:48.867664   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:48.867678   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:48.867688   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:48.867699   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:48.867708   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:48.867717   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.867729   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:48.867746   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:48.867767   10347 retry.go:31] will retry after 364.128275ms: missing components: kube-dns
	I1216 02:26:48.995983   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:49.058327   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:49.060421   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:49.237410   10347 system_pods.go:86] 20 kube-system pods found
	I1216 02:26:49.237441   10347 system_pods.go:89] "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:49.237449   10347 system_pods.go:89] "coredns-66bc5c9577-cjv67" [bcd61c89-5a7b-467b-a368-8b8a3808d205] Running
	I1216 02:26:49.237460   10347 system_pods.go:89] "csi-hostpath-attacher-0" [705d877d-4ff8-4bf3-86b0-47ca46a7ce66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 02:26:49.237468   10347 system_pods.go:89] "csi-hostpath-resizer-0" [b7d0ed06-d53e-418c-95e3-9fad1875df89] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 02:26:49.237479   10347 system_pods.go:89] "csi-hostpathplugin-hd2bb" [8da3e85d-1adb-4ab0-a6a8-722c94364939] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 02:26:49.237485   10347 system_pods.go:89] "etcd-addons-568105" [6b7547f1-b61d-4c35-9a0c-aae745134557] Running
	I1216 02:26:49.237491   10347 system_pods.go:89] "kindnet-7cvb5" [46124e77-eb66-4ff8-9130-6c00db12ef59] Running
	I1216 02:26:49.237496   10347 system_pods.go:89] "kube-apiserver-addons-568105" [8547ed2d-f244-4110-a571-368b5c3b7cd2] Running
	I1216 02:26:49.237508   10347 system_pods.go:89] "kube-controller-manager-addons-568105" [01dc49b2-243b-4a44-97ed-75c3c3313064] Running
	I1216 02:26:49.237535   10347 system_pods.go:89] "kube-ingress-dns-minikube" [bf48c764-515d-455e-8680-12747cedf14d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:49.237545   10347 system_pods.go:89] "kube-proxy-plzgj" [8dd10f3d-e6b0-4042-b032-70c0961ebbcb] Running
	I1216 02:26:49.237552   10347 system_pods.go:89] "kube-scheduler-addons-568105" [d34cf008-f31c-4232-a8b1-72687197eb0b] Running
	I1216 02:26:49.237559   10347 system_pods.go:89] "metrics-server-85b7d694d7-v6wb9" [860edbc2-c8b2-432e-a963-922f42e3ecf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:49.237567   10347 system_pods.go:89] "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:49.237575   10347 system_pods.go:89] "registry-6b586f9694-b7vlw" [e80e9c6a-6c21-49f1-93c1-7a9a3cef2446] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:49.237593   10347 system_pods.go:89] "registry-creds-764b6fb674-d6sz6" [e986d132-c5e7-42d8-b08d-ede7ad0a002a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:49.237603   10347 system_pods.go:89] "registry-proxy-gx76q" [729984e0-c1a4-40b2-a423-8778d4fedd1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:49.237610   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cl5vk" [20572e30-8ca6-499a-9619-14d1c0fce221] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:49.237622   10347 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zzrsb" [f1fb1b13-351a-4424-865a-dadc340c3728] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:49.237627   10347 system_pods.go:89] "storage-provisioner" [1122aa5b-69d7-43cb-8b25-eb5647aa58a5] Running
	I1216 02:26:49.237638   10347 system_pods.go:126] duration metric: took 832.999889ms to wait for k8s-apps to be running ...
	I1216 02:26:49.237648   10347 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 02:26:49.237695   10347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:26:49.254921   10347 system_svc.go:56] duration metric: took 17.265442ms WaitForService to wait for kubelet
	I1216 02:26:49.254955   10347 kubeadm.go:587] duration metric: took 42.755146154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:26:49.254981   10347 node_conditions.go:102] verifying NodePressure condition ...
	I1216 02:26:49.258113   10347 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 02:26:49.258141   10347 node_conditions.go:123] node cpu capacity is 8
	I1216 02:26:49.258168   10347 node_conditions.go:105] duration metric: took 3.175466ms to run NodePressure ...
	I1216 02:26:49.258188   10347 start.go:242] waiting for startup goroutines ...
	I1216 02:26:49.297253   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:49.496432   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:49.557050   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:49.560734   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:49.797157   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:49.996273   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:50.056143   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:50.060549   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:50.296966   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:50.496855   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:50.556450   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:50.560632   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:50.796627   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:50.995767   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:51.056765   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:51.059777   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:51.296772   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:51.495982   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:51.556958   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:51.559958   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:51.797262   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:51.995598   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:52.056445   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:52.060714   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:52.297219   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:52.496448   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:52.555849   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:52.559876   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:52.831142   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:52.996340   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:53.056180   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:53.060283   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:53.297181   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:53.496323   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:53.596576   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:53.596591   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:53.796012   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:53.996911   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:54.056685   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:54.157443   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:54.297351   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:54.496562   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:54.556313   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:54.560427   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:54.796681   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:54.996360   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:55.056015   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:55.097562   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:55.296617   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:55.496327   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:55.555737   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:55.560460   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:55.796958   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:55.996675   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:56.056537   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:56.059854   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:56.296660   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:56.495883   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:56.556837   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:56.559891   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:56.796653   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:56.995643   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:57.056071   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:57.060172   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:57.297269   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:57.497126   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:57.557240   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:57.560507   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:57.796548   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:57.995434   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:58.056252   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:58.062151   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:58.295686   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:58.495230   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:58.555722   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:58.559531   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:58.796087   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:58.999078   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:59.055697   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:59.061766   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:59.296985   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:59.496056   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:59.556749   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:59.560300   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:59.796811   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:59.995853   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:00.056464   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:00.060680   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:00.297094   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:00.496029   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:00.577006   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:00.627648   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:00.797284   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:00.996046   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:01.056650   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:01.059520   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:01.296294   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:01.496636   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:01.556236   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:01.560262   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:01.797072   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:01.996976   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:02.056768   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:02.060213   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:02.297185   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:02.496913   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:02.556416   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:02.560372   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:02.795998   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:02.995602   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:03.055946   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:03.059878   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:03.297029   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:03.496363   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:03.556181   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:03.560500   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:03.796435   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:03.995601   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:04.056362   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:04.060506   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:04.297543   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:04.495858   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:04.559014   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:04.560022   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:04.796551   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:04.995365   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:05.055759   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:05.059428   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:05.296224   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:05.496072   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:05.556627   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:05.559492   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:05.795728   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:05.996027   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:06.056528   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:06.059671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:06.297221   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:06.495901   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:06.556343   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:06.560068   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:06.797757   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:06.995671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:07.056064   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:07.060445   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:07.296671   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:07.495714   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:07.556250   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:07.560792   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:07.796891   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:07.996517   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:08.056236   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:08.060516   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:08.296487   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:08.495704   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:08.555777   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:08.559483   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:08.796835   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:08.995747   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:09.056588   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:09.059516   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:09.296527   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:09.495499   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:09.556073   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:09.560006   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:09.796558   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:09.995467   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:10.056317   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:10.060672   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:10.296914   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:10.496153   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:10.555501   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:10.560291   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:10.797388   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:10.995157   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:11.056245   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:11.060356   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:11.296210   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:11.496278   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:11.596593   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:11.596605   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:11.798624   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:11.995917   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:12.056595   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:12.059792   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:12.297899   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:12.496196   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:12.556209   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:12.560960   10347 kapi.go:107] duration metric: took 1m4.503640416s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 02:27:12.797348   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:12.996312   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:13.056119   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:13.379194   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:13.496321   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:13.555998   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:13.796697   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:13.995950   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:14.056267   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:14.297208   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:14.496366   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:14.556309   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:14.797412   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:14.995628   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:15.096079   10347 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:15.296943   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:15.495663   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:15.556223   10347 kapi.go:107] duration metric: took 1m7.503412117s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 02:27:15.796629   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:15.995763   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:16.296695   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:16.495771   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:16.796516   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:16.996213   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:17.297100   10347 kapi.go:107] duration metric: took 1m2.50362929s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 02:27:17.298145   10347 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-568105 cluster.
	I1216 02:27:17.299254   10347 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 02:27:17.300620   10347 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 02:27:17.497087   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:17.995604   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:18.496074   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:18.997036   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:19.496345   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:19.996054   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:20.496652   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:20.996065   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:21.496991   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:21.995906   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:22.496148   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:22.996066   10347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:23.495637   10347 kapi.go:107] duration metric: took 1m15.003275635s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 02:27:23.497271   10347 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner, registry-creds, inspektor-gadget, nvidia-device-plugin, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1216 02:27:23.498377   10347 addons.go:530] duration metric: took 1m16.998540371s for enable addons: enabled=[cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner registry-creds inspektor-gadget nvidia-device-plugin metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1216 02:27:23.498411   10347 start.go:247] waiting for cluster config update ...
	I1216 02:27:23.498427   10347 start.go:256] writing updated cluster config ...
	I1216 02:27:23.498661   10347 ssh_runner.go:195] Run: rm -f paused
	I1216 02:27:23.502564   10347 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:27:23.505236   10347 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cjv67" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.508999   10347 pod_ready.go:94] pod "coredns-66bc5c9577-cjv67" is "Ready"
	I1216 02:27:23.509021   10347 pod_ready.go:86] duration metric: took 3.765345ms for pod "coredns-66bc5c9577-cjv67" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.510593   10347 pod_ready.go:83] waiting for pod "etcd-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.513667   10347 pod_ready.go:94] pod "etcd-addons-568105" is "Ready"
	I1216 02:27:23.513683   10347 pod_ready.go:86] duration metric: took 3.074152ms for pod "etcd-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.515263   10347 pod_ready.go:83] waiting for pod "kube-apiserver-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.518326   10347 pod_ready.go:94] pod "kube-apiserver-addons-568105" is "Ready"
	I1216 02:27:23.518344   10347 pod_ready.go:86] duration metric: took 3.062383ms for pod "kube-apiserver-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.519841   10347 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:23.905735   10347 pod_ready.go:94] pod "kube-controller-manager-addons-568105" is "Ready"
	I1216 02:27:23.905759   10347 pod_ready.go:86] duration metric: took 385.903954ms for pod "kube-controller-manager-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:24.106789   10347 pod_ready.go:83] waiting for pod "kube-proxy-plzgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:24.506138   10347 pod_ready.go:94] pod "kube-proxy-plzgj" is "Ready"
	I1216 02:27:24.506163   10347 pod_ready.go:86] duration metric: took 399.349752ms for pod "kube-proxy-plzgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:24.707330   10347 pod_ready.go:83] waiting for pod "kube-scheduler-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:25.106642   10347 pod_ready.go:94] pod "kube-scheduler-addons-568105" is "Ready"
	I1216 02:27:25.106669   10347 pod_ready.go:86] duration metric: took 399.31319ms for pod "kube-scheduler-addons-568105" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:27:25.106680   10347 pod_ready.go:40] duration metric: took 1.604089765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:27:25.150433   10347 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 02:27:25.152995   10347 out.go:179] * Done! kubectl is now configured to use "addons-568105" cluster and "default" namespace by default
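The wait loop above polls addon pods by label selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) until each reports Ready, and the gcp-auth note explains that a pod can opt out of credential mounting via a label. The same checks can be approximated from a shell against the finished cluster; a minimal sketch, assuming kubectl is already pointed at the addons-568105 profile (the no-gcp-creds pod name and the timeout value are illustrative, not part of the test run):

	# List addon pods by the labels the wait loop polls on
	kubectl get pods -A -l kubernetes.io/minikube-addons=registry
	kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx

	# Block until the csi-hostpath-driver pods in kube-system report Ready
	kubectl wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  -n kube-system --timeout=5m

	# Per the gcp-auth note, a new pod can skip credential mounting via a label
	kubectl run no-gcp-creds --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600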
	
	
	==> CRI-O <==
	Dec 16 02:27:22 addons-568105 crio[772]: time="2025-12-16T02:27:22.972747316Z" level=info msg="Starting container: 5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e" id=29dc895d-6a2b-4255-a3d8-def129b8063d name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 02:27:22 addons-568105 crio[772]: time="2025-12-16T02:27:22.975228922Z" level=info msg="Started container" PID=6068 containerID=5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e description=kube-system/csi-hostpathplugin-hd2bb/csi-snapshotter id=29dc895d-6a2b-4255-a3d8-def129b8063d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d32592273ae9576f31b69740dc8d6d89d9961d1c6590db1da39029405bd3aa25
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.971453187Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3d1e22b1-4366-4004-8ad9-2b28edac7252 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.971511694Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.977120255Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8afd7b5b116edb495331c8d19beab954e954ba67155383cb515e94ffe0cd4d83 UID:12352787-47ea-402d-9f11-e5894590b258 NetNS:/var/run/netns/205c5eb7-be7f-41b3-a0a9-2df3b54b2565 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00055e2b8}] Aliases:map[]}"
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.977148042Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.986603698Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8afd7b5b116edb495331c8d19beab954e954ba67155383cb515e94ffe0cd4d83 UID:12352787-47ea-402d-9f11-e5894590b258 NetNS:/var/run/netns/205c5eb7-be7f-41b3-a0a9-2df3b54b2565 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00055e2b8}] Aliases:map[]}"
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.986712495Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.987484479Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.98831428Z" level=info msg="Ran pod sandbox 8afd7b5b116edb495331c8d19beab954e954ba67155383cb515e94ffe0cd4d83 with infra container: default/busybox/POD" id=3d1e22b1-4366-4004-8ad9-2b28edac7252 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.989410527Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e2371b3-3215-4848-93a9-b6163e336bcf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.989502793Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e2371b3-3215-4848-93a9-b6163e336bcf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.989530296Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8e2371b3-3215-4848-93a9-b6163e336bcf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.990058872Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bd278f85-0d5f-40ff-9da5-37cfd62ddae7 name=/runtime.v1.ImageService/PullImage
	Dec 16 02:27:25 addons-568105 crio[772]: time="2025-12-16T02:27:25.991598011Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.274513853Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=bd278f85-0d5f-40ff-9da5-37cfd62ddae7 name=/runtime.v1.ImageService/PullImage
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.275048545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00b4cd18-5c28-4efa-94b0-1898ea1da4a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.27629479Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3bfe5d44-24b1-48e0-b974-7babab5ecc4c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.279425824Z" level=info msg="Creating container: default/busybox/busybox" id=042007ba-cefe-40e7-8d14-7153f6fee571 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.279520947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.284699681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.285169841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.312924964Z" level=info msg="Created container 44e24f2e49438da64edaf334e62b778698474bd092d63e623d69aae87fff9d16: default/busybox/busybox" id=042007ba-cefe-40e7-8d14-7153f6fee571 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.31349785Z" level=info msg="Starting container: 44e24f2e49438da64edaf334e62b778698474bd092d63e623d69aae87fff9d16" id=98f352c9-16ae-4fe8-979c-430e2dc2700e name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 02:27:27 addons-568105 crio[772]: time="2025-12-16T02:27:27.315063943Z" level=info msg="Started container" PID=6186 containerID=44e24f2e49438da64edaf334e62b778698474bd092d63e623d69aae87fff9d16 description=default/busybox/busybox id=98f352c9-16ae-4fe8-979c-430e2dc2700e name=/runtime.v1.RuntimeService/StartContainer sandboxID=8afd7b5b116edb495331c8d19beab954e954ba67155383cb515e94ffe0cd4d83
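The CRI-O entries above trace the full lifecycle of the default/busybox pod: sandbox creation on the kindnet CNI network, the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc, then container creation and start. The resulting runtime state can be inspected on the node with crictl; a sketch, assuming a shell inside the node (for example via minikube ssh -p addons-568105):

	# Confirm the pulled busybox image is present in the CRI-O image store
	sudo crictl images | grep busybox

	# Show the running busybox container and its pod sandbox
	sudo crictl ps --name busybox
	sudo crictl pods --name busybox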
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	44e24f2e49438       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   8afd7b5b116ed       busybox                                     default
	5a9662216a426       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	7d237fed170b0       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago       Running             csi-provisioner                          0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	ae9b9276f546b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 seconds ago       Running             liveness-probe                           0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	7e47591ff7931       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	8136945c1766e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            17 seconds ago       Running             gadget                                   0                   8c837d91c6fd1       gadget-qf8c2                                gadget
	e1658f146a4d2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	7f1d5f2b42a6d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 19 seconds ago       Running             gcp-auth                                 0                   642a2eb535cd1       gcp-auth-78565c9fb4-8dg8c                   gcp-auth
	c6973891ec7ff       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             21 seconds ago       Running             controller                               0                   48ad19d4a1f48       ingress-nginx-controller-85d4c799dd-dwmcj   ingress-nginx
	978c45196be43       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   5d4f062dcb613       registry-proxy-gx76q                        kube-system
	dbded21ce9b6c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   27 seconds ago       Running             csi-external-health-monitor-controller   0                   d32592273ae95       csi-hostpathplugin-hd2bb                    kube-system
	5258c264d4ef1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     27 seconds ago       Running             amd-gpu-device-plugin                    0                   1dcf88e800b08       amd-gpu-device-plugin-zpwqw                 kube-system
	f2b1c7c11696c       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     28 seconds ago       Running             nvidia-device-plugin-ctr                 0                   c2f1828389bd2       nvidia-device-plugin-daemonset-kzstn        kube-system
	c12ac696e0118       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   31 seconds ago       Exited              patch                                    0                   7b0e2fd3c625f       gcp-auth-certs-patch-wpxtt                  gcp-auth
	1034828f8f006       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   5833967738d50       snapshot-controller-7d9fbc56b8-zzrsb        kube-system
	51cd2f7227a66       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   e01bac62c5e52       snapshot-controller-7d9fbc56b8-cl5vk        kube-system
	f07eb262fc567       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              32 seconds ago       Running             csi-resizer                              0                   d7bf72bb58d79       csi-hostpath-resizer-0                      kube-system
	7f18a86c1651e       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             33 seconds ago       Exited              patch                                    1                   36ae70cc61847       ingress-nginx-admission-patch-btk4c         ingress-nginx
	218d4e28f821c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   33 seconds ago       Exited              create                                   0                   6bd1a8fedda8d       ingress-nginx-admission-create-b9ppx        ingress-nginx
	c790a5dda1f08       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   72add02616452       csi-hostpath-attacher-0                     kube-system
	72d3fd952d2c3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago       Exited              create                                   0                   52421ef1a9efa       gcp-auth-certs-create-p8gm9                 gcp-auth
	777da03eb5c22       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               35 seconds ago       Running             cloud-spanner-emulator                   0                   9e3700b012857       cloud-spanner-emulator-5bdddb765-r5xh9      default
	c3d2e4a1a0c55       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           38 seconds ago       Running             registry                                 0                   b83c3c38ebae0       registry-6b586f9694-b7vlw                   kube-system
	b41f823822869       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              39 seconds ago       Running             yakd                                     0                   ec5fa893ac51b       yakd-dashboard-5ff678cb9-hsz94              yakd-dashboard
	aacc04b82103a       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        41 seconds ago       Running             metrics-server                           0                   d5840fd038606       metrics-server-85b7d694d7-v6wb9             kube-system
	2c63fbd589fcf       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             42 seconds ago       Running             local-path-provisioner                   0                   ade576fb9e8e1       local-path-provisioner-648f6765c9-72tvv     local-path-storage
	4e4882ff4f3f0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               43 seconds ago       Running             minikube-ingress-dns                     0                   99985094a15f4       kube-ingress-dns-minikube                   kube-system
	df8bdac96f7e8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago       Running             coredns                                  0                   92dea7f717284       coredns-66bc5c9577-cjv67                    kube-system
	ae4534dbc38ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   03f5001ddf260       storage-provisioner                         kube-system
	4472bad932d44       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   04303bfba66e3       kube-proxy-plzgj                            kube-system
	42bdabbf350a0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   c07898e0059b1       kindnet-7cvb5                               kube-system
	168b7336b0d71       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   7539327b49a77       kube-scheduler-addons-568105                kube-system
	5fc64e9c331d1       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   917c69ef5faee       kube-controller-manager-addons-568105       kube-system
	f3d9e1dc84639       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   9c54b75ce508d       etcd-addons-568105                          kube-system
	c1f7c97ecb411       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   5d03c9ae2d4d0       kube-apiserver-addons-568105                kube-system
	
	
	==> coredns [df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae] <==
	[INFO] 10.244.0.13:56490 - 13829 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000141963s
	[INFO] 10.244.0.13:54111 - 44503 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090347s
	[INFO] 10.244.0.13:54111 - 44721 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000131007s
	[INFO] 10.244.0.13:53840 - 46710 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000081056s
	[INFO] 10.244.0.13:53840 - 46430 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000069985s
	[INFO] 10.244.0.13:47664 - 42576 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000051387s
	[INFO] 10.244.0.13:47664 - 42402 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000086325s
	[INFO] 10.244.0.13:41886 - 1400 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000070785s
	[INFO] 10.244.0.13:41886 - 1552 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000112274s
	[INFO] 10.244.0.13:44552 - 40173 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107302s
	[INFO] 10.244.0.13:44552 - 39788 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138253s
	[INFO] 10.244.0.21:44161 - 14610 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000200171s
	[INFO] 10.244.0.21:47558 - 59854 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000294008s
	[INFO] 10.244.0.21:49908 - 40260 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125803s
	[INFO] 10.244.0.21:34780 - 65472 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000218906s
	[INFO] 10.244.0.21:58220 - 51294 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119253s
	[INFO] 10.244.0.21:47224 - 44166 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156328s
	[INFO] 10.244.0.21:45372 - 1821 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006238635s
	[INFO] 10.244.0.21:54502 - 54653 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007296124s
	[INFO] 10.244.0.21:36206 - 36971 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004533083s
	[INFO] 10.244.0.21:48777 - 9670 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004880422s
	[INFO] 10.244.0.21:44170 - 62549 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005346265s
	[INFO] 10.244.0.21:47757 - 8392 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00611276s
	[INFO] 10.244.0.21:47135 - 12200 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000745257s
	[INFO] 10.244.0.21:52096 - 18320 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001018088s
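The CoreDNS queries above show standard ndots search-path expansion: lookups for registry.kube-system.svc.cluster.local and storage.googleapis.com are first tried with the cluster and GCE search domains appended (each returning NXDOMAIN) before the intended name resolves with NOERROR. This behaviour can be reproduced from the busybox pod created earlier; a sketch, assuming that pod is still running in the default namespace:

	# Resolve the in-cluster registry service, as the log records
	kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local

	# Inspect the search domains that drive the NXDOMAIN expansions
	kubectl exec busybox -- cat /etc/resolv.conf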
	
	
	==> describe nodes <==
	Name:               addons-568105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-568105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=addons-568105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T02_26_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-568105
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-568105"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 02:25:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-568105
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 02:27:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 02:27:32 +0000   Tue, 16 Dec 2025 02:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 02:27:32 +0000   Tue, 16 Dec 2025 02:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 02:27:32 +0000   Tue, 16 Dec 2025 02:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 02:27:32 +0000   Tue, 16 Dec 2025 02:26:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-568105
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                23aff3bb-760a-437e-a58a-31de8eddbaa4
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-r5xh9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-qf8c2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gcp-auth                    gcp-auth-78565c9fb4-8dg8c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-dwmcj    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         88s
	  kube-system                 amd-gpu-device-plugin-zpwqw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-cjv67                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-hd2bb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-568105                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-7cvb5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-addons-568105                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-addons-568105        200m (2%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-plzgj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-addons-568105                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 metrics-server-85b7d694d7-v6wb9              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         89s
	  kube-system                 nvidia-device-plugin-daemonset-kzstn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-b7vlw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-creds-764b6fb674-d6sz6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-gx76q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-cl5vk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-zzrsb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          local-path-provisioner-648f6765c9-72tvv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-hsz94               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node addons-568105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node addons-568105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x8 over 100s)  kubelet          Node addons-568105 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node addons-568105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node addons-568105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node addons-568105 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                  node-controller  Node addons-568105 event: Registered Node addons-568105 in Controller
	  Normal  NodeReady                49s                  kubelet          Node addons-568105 status is now: NodeReady
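The node description above is ordinary kubectl describe node output for the single control-plane node, including the resource roll-up and the Ready condition. Equivalent queries, as a brief sketch (the jsonpath expression is illustrative, not part of the test run):

	# Regenerate the description shown above
	kubectl describe node addons-568105

	# Just the Ready condition status and the allocatable resources
	kubectl get node addons-568105 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}{.status.allocatable}{"\n"}'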
	
	
	==> dmesg <==
	[Dec16 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001893] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386290] i8042: Warning: Keylock active
	[  +0.012328] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.496725] block sda: the capability attribute has been deprecated.
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079] <==
	{"level":"warn","ts":"2025-12-16T02:25:58.166359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.179929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.187163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.193535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.199880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.206295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.213562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.221275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.227267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.233394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.240228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.247144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.252994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.268018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.274691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.281297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:25:58.323584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:09.016304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:09.025085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.711913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.718493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.732162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:26:35.738319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:27:00.796273Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.707416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:27:00.796444Z","caller":"traceutil/trace.go:172","msg":"trace[1190644849] range","detail":"{range_begin:/registry/controllers; range_end:; response_count:0; response_revision:1031; }","duration":"127.88781ms","start":"2025-12-16T02:27:00.668541Z","end":"2025-12-16T02:27:00.796429Z","steps":["trace[1190644849] 'range keys from in-memory index tree'  (duration: 127.644072ms)"],"step_count":1}
	
	
	==> gcp-auth [7f1d5f2b42a6dd6c9827db63b9a36c72d5e9c37d17e0ee2e7879b7e1463a6149] <==
	2025/12/16 02:27:16 GCP Auth Webhook started!
	2025/12/16 02:27:25 Ready to marshal response ...
	2025/12/16 02:27:25 Ready to write response ...
	2025/12/16 02:27:25 Ready to marshal response ...
	2025/12/16 02:27:25 Ready to write response ...
	2025/12/16 02:27:25 Ready to marshal response ...
	2025/12/16 02:27:25 Ready to write response ...
	
	
	==> kernel <==
	 02:27:36 up 10 min,  0 user,  load average: 1.38, 0.67, 0.25
	Linux addons-568105 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae] <==
	I1216 02:26:07.446450       1 main.go:148] setting mtu 1500 for CNI 
	I1216 02:26:07.446463       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 02:26:07.446483       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T02:26:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 02:26:07.677758       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 02:26:07.742854       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 02:26:07.742945       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 02:26:07.743172       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 02:26:37.743576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1216 02:26:37.744617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 02:26:37.744691       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 02:26:37.842228       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1216 02:26:39.243398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 02:26:39.243441       1 metrics.go:72] Registering metrics
	I1216 02:26:39.243484       1 controller.go:711] "Syncing nftables rules"
	I1216 02:26:47.681612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:26:47.681672       1 main.go:301] handling current node
	I1216 02:26:57.677375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:26:57.677497       1 main.go:301] handling current node
	I1216 02:27:07.677184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:27:07.677227       1 main.go:301] handling current node
	I1216 02:27:17.677977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:27:17.678370       1 main.go:301] handling current node
	I1216 02:27:27.677447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 02:27:27.677488       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed] <==
	E1216 02:26:57.067091       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1216 02:26:57.067152       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.144.234:443: connect: connection refused" logger="UnhandledError"
	E1216 02:26:57.068899       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.144.234:443: connect: connection refused" logger="UnhandledError"
	W1216 02:26:58.067173       1 handler_proxy.go:99] no RequestInfo found in the context
	W1216 02:26:58.067173       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 02:26:58.067290       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 02:26:58.067305       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1216 02:26:58.067332       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 02:26:58.068320       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 02:26:58.773774       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 02:27:02.080836       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.144.234:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1216 02:27:02.081264       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 02:27:02.081311       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1216 02:27:02.097857       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1216 02:27:34.804627       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59880: use of closed network connection
	E1216 02:27:34.951323       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59898: use of closed network connection
	
	
	==> kube-controller-manager [5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800] <==
	I1216 02:26:05.690020       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 02:26:05.690101       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 02:26:05.690409       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 02:26:05.690495       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 02:26:05.690532       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 02:26:05.690613       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 02:26:05.691430       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 02:26:05.691454       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 02:26:05.691484       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 02:26:05.692783       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 02:26:05.693316       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 02:26:05.698261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:26:05.700995       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 02:26:05.702901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:26:05.706165       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 02:26:05.715782       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1216 02:26:07.706008       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 02:26:35.706701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 02:26:35.706869       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 02:26:35.706941       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 02:26:35.722524       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 02:26:35.726244       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 02:26:35.807635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:26:35.826934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 02:26:50.625224       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72] <==
	I1216 02:26:08.133354       1 server_linux.go:53] "Using iptables proxy"
	I1216 02:26:08.192955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 02:26:08.293723       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 02:26:08.293752       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 02:26:08.293867       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 02:26:08.313447       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 02:26:08.313506       1 server_linux.go:132] "Using iptables Proxier"
	I1216 02:26:08.318991       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 02:26:08.323886       1 server.go:527] "Version info" version="v1.34.2"
	I1216 02:26:08.323991       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 02:26:08.325631       1 config.go:200] "Starting service config controller"
	I1216 02:26:08.325655       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 02:26:08.325735       1 config.go:106] "Starting endpoint slice config controller"
	I1216 02:26:08.325767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 02:26:08.325897       1 config.go:309] "Starting node config controller"
	I1216 02:26:08.325915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 02:26:08.325923       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 02:26:08.326185       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 02:26:08.326202       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 02:26:08.425850       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 02:26:08.425944       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 02:26:08.426325       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b] <==
	E1216 02:25:58.730800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 02:25:58.730910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 02:25:58.731130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 02:25:58.731167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 02:25:58.731547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 02:25:58.731696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 02:25:58.731941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:25:58.731962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:25:58.732073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 02:25:58.732146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:25:58.732202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 02:25:58.732219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 02:25:58.732237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 02:25:58.732290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 02:25:58.732283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 02:25:58.732320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 02:25:58.732340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 02:25:58.732430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 02:25:59.623374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:25:59.665003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:25:59.704996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 02:25:59.715055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 02:25:59.799611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 02:25:59.844753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1216 02:26:02.429071       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 02:27:06 addons-568105 kubelet[1277]: I1216 02:27:06.285331    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx2rr\" (UniqueName: \"kubernetes.io/projected/422375a7-c90a-4ef1-96bc-73ada4a87492-kube-api-access-zx2rr\") pod \"422375a7-c90a-4ef1-96bc-73ada4a87492\" (UID: \"422375a7-c90a-4ef1-96bc-73ada4a87492\") "
	Dec 16 02:27:06 addons-568105 kubelet[1277]: I1216 02:27:06.287844    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422375a7-c90a-4ef1-96bc-73ada4a87492-kube-api-access-zx2rr" (OuterVolumeSpecName: "kube-api-access-zx2rr") pod "422375a7-c90a-4ef1-96bc-73ada4a87492" (UID: "422375a7-c90a-4ef1-96bc-73ada4a87492"). InnerVolumeSpecName "kube-api-access-zx2rr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 02:27:06 addons-568105 kubelet[1277]: I1216 02:27:06.386124    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zx2rr\" (UniqueName: \"kubernetes.io/projected/422375a7-c90a-4ef1-96bc-73ada4a87492-kube-api-access-zx2rr\") on node \"addons-568105\" DevicePath \"\""
	Dec 16 02:27:07 addons-568105 kubelet[1277]: I1216 02:27:07.105456    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b0e2fd3c625f384b4e357d9eeda23282a51a9f162872286d0e1fd34f54671bb"
	Dec 16 02:27:08 addons-568105 kubelet[1277]: I1216 02:27:08.110883    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kzstn" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:27:08 addons-568105 kubelet[1277]: I1216 02:27:08.121722    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-kzstn" podStartSLOduration=1.689787649 podStartE2EDuration="21.121699882s" podCreationTimestamp="2025-12-16 02:26:47 +0000 UTC" firstStartedPulling="2025-12-16 02:26:48.360329321 +0000 UTC m=+47.566922797" lastFinishedPulling="2025-12-16 02:27:07.792241552 +0000 UTC m=+66.998835030" observedRunningTime="2025-12-16 02:27:08.121291413 +0000 UTC m=+67.327884909" watchObservedRunningTime="2025-12-16 02:27:08.121699882 +0000 UTC m=+67.328293378"
	Dec 16 02:27:09 addons-568105 kubelet[1277]: I1216 02:27:09.115945    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kzstn" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:27:09 addons-568105 kubelet[1277]: I1216 02:27:09.116053    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zpwqw" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:27:09 addons-568105 kubelet[1277]: I1216 02:27:09.131342    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-zpwqw" podStartSLOduration=2.05293608 podStartE2EDuration="22.131321176s" podCreationTimestamp="2025-12-16 02:26:47 +0000 UTC" firstStartedPulling="2025-12-16 02:26:48.369775988 +0000 UTC m=+47.576369465" lastFinishedPulling="2025-12-16 02:27:08.448161083 +0000 UTC m=+67.654754561" observedRunningTime="2025-12-16 02:27:09.130189777 +0000 UTC m=+68.336783272" watchObservedRunningTime="2025-12-16 02:27:09.131321176 +0000 UTC m=+68.337914672"
	Dec 16 02:27:10 addons-568105 kubelet[1277]: I1216 02:27:10.121489    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zpwqw" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:27:12 addons-568105 kubelet[1277]: I1216 02:27:12.133212    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gx76q" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:27:13 addons-568105 kubelet[1277]: I1216 02:27:13.136605    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gx76q" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:27:15 addons-568105 kubelet[1277]: I1216 02:27:15.156808    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-dwmcj" podStartSLOduration=56.069518338 podStartE2EDuration="1m7.156791615s" podCreationTimestamp="2025-12-16 02:26:08 +0000 UTC" firstStartedPulling="2025-12-16 02:27:03.892171008 +0000 UTC m=+63.098764494" lastFinishedPulling="2025-12-16 02:27:14.979444275 +0000 UTC m=+74.186037771" observedRunningTime="2025-12-16 02:27:15.156417324 +0000 UTC m=+74.363010821" watchObservedRunningTime="2025-12-16 02:27:15.156791615 +0000 UTC m=+74.363385126"
	Dec 16 02:27:15 addons-568105 kubelet[1277]: I1216 02:27:15.157721    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-gx76q" podStartSLOduration=5.116198146 podStartE2EDuration="28.157709044s" podCreationTimestamp="2025-12-16 02:26:47 +0000 UTC" firstStartedPulling="2025-12-16 02:26:48.428049942 +0000 UTC m=+47.634643417" lastFinishedPulling="2025-12-16 02:27:11.469560841 +0000 UTC m=+70.676154315" observedRunningTime="2025-12-16 02:27:12.151455676 +0000 UTC m=+71.358049172" watchObservedRunningTime="2025-12-16 02:27:15.157709044 +0000 UTC m=+74.364302552"
	Dec 16 02:27:17 addons-568105 kubelet[1277]: I1216 02:27:17.166905    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-8dg8c" podStartSLOduration=50.334047204 podStartE2EDuration="1m3.166885102s" podCreationTimestamp="2025-12-16 02:26:14 +0000 UTC" firstStartedPulling="2025-12-16 02:27:03.898249046 +0000 UTC m=+63.104842522" lastFinishedPulling="2025-12-16 02:27:16.731086942 +0000 UTC m=+75.937680420" observedRunningTime="2025-12-16 02:27:17.166692849 +0000 UTC m=+76.373286344" watchObservedRunningTime="2025-12-16 02:27:17.166885102 +0000 UTC m=+76.373478598"
	Dec 16 02:27:19 addons-568105 kubelet[1277]: E1216 02:27:19.793864    1277 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 16 02:27:19 addons-568105 kubelet[1277]: E1216 02:27:19.793975    1277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e986d132-c5e7-42d8-b08d-ede7ad0a002a-gcr-creds podName:e986d132-c5e7-42d8-b08d-ede7ad0a002a nodeName:}" failed. No retries permitted until 2025-12-16 02:27:51.793953453 +0000 UTC m=+111.000546945 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e986d132-c5e7-42d8-b08d-ede7ad0a002a-gcr-creds") pod "registry-creds-764b6fb674-d6sz6" (UID: "e986d132-c5e7-42d8-b08d-ede7ad0a002a") : secret "registry-creds-gcr" not found
	Dec 16 02:27:20 addons-568105 kubelet[1277]: I1216 02:27:20.944843    1277 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 16 02:27:20 addons-568105 kubelet[1277]: I1216 02:27:20.944894    1277 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 16 02:27:22 addons-568105 kubelet[1277]: I1216 02:27:22.178033    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-qf8c2" podStartSLOduration=67.889652786 podStartE2EDuration="1m15.178015225s" podCreationTimestamp="2025-12-16 02:26:07 +0000 UTC" firstStartedPulling="2025-12-16 02:27:12.115575368 +0000 UTC m=+71.322168857" lastFinishedPulling="2025-12-16 02:27:19.403937814 +0000 UTC m=+78.610531296" observedRunningTime="2025-12-16 02:27:20.181275045 +0000 UTC m=+79.387868541" watchObservedRunningTime="2025-12-16 02:27:22.178015225 +0000 UTC m=+81.384608724"
	Dec 16 02:27:23 addons-568105 kubelet[1277]: I1216 02:27:23.204074    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-hd2bb" podStartSLOduration=1.663678028 podStartE2EDuration="36.204053162s" podCreationTimestamp="2025-12-16 02:26:47 +0000 UTC" firstStartedPulling="2025-12-16 02:26:48.371369827 +0000 UTC m=+47.577963306" lastFinishedPulling="2025-12-16 02:27:22.911744964 +0000 UTC m=+82.118338440" observedRunningTime="2025-12-16 02:27:23.203247331 +0000 UTC m=+82.409840843" watchObservedRunningTime="2025-12-16 02:27:23.204053162 +0000 UTC m=+82.410646660"
	Dec 16 02:27:25 addons-568105 kubelet[1277]: I1216 02:27:25.735778    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/12352787-47ea-402d-9f11-e5894590b258-gcp-creds\") pod \"busybox\" (UID: \"12352787-47ea-402d-9f11-e5894590b258\") " pod="default/busybox"
	Dec 16 02:27:25 addons-568105 kubelet[1277]: I1216 02:27:25.735840    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzt8\" (UniqueName: \"kubernetes.io/projected/12352787-47ea-402d-9f11-e5894590b258-kube-api-access-gwzt8\") pod \"busybox\" (UID: \"12352787-47ea-402d-9f11-e5894590b258\") " pod="default/busybox"
	Dec 16 02:27:28 addons-568105 kubelet[1277]: I1216 02:27:28.224625    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9386228220000001 podStartE2EDuration="3.224605358s" podCreationTimestamp="2025-12-16 02:27:25 +0000 UTC" firstStartedPulling="2025-12-16 02:27:25.989791139 +0000 UTC m=+85.196384614" lastFinishedPulling="2025-12-16 02:27:27.275773675 +0000 UTC m=+86.482367150" observedRunningTime="2025-12-16 02:27:28.223941079 +0000 UTC m=+87.430534579" watchObservedRunningTime="2025-12-16 02:27:28.224605358 +0000 UTC m=+87.431198854"
	Dec 16 02:27:34 addons-568105 kubelet[1277]: I1216 02:27:34.885626    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f77586-691c-4d1c-a2a1-50b169ff5b86" path="/var/lib/kubelet/pods/81f77586-691c-4d1c-a2a1-50b169ff5b86/volumes"
	
	
	==> storage-provisioner [ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b] <==
	W1216 02:27:12.698043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:14.701088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:14.705116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:16.707680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:16.711168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:18.714853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:18.720204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:20.722903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:20.726945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:22.729867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:22.733751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:24.736432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:24.739791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:26.742281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:26.745471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:28.748392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:28.752087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:30.754647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:30.757983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:32.760675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:32.763878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:34.766971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:34.773257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:36.775638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:27:36.779461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-568105 -n addons-568105
helpers_test.go:270: (dbg) Run:  kubectl --context addons-568105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c registry-creds-764b6fb674-d6sz6
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-568105 describe pod ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c registry-creds-764b6fb674-d6sz6
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-568105 describe pod ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c registry-creds-764b6fb674-d6sz6: exit status 1 (57.775177ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b9ppx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-btk4c" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-d6sz6" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-568105 describe pod ingress-nginx-admission-create-b9ppx ingress-nginx-admission-patch-btk4c registry-creds-764b6fb674-d6sz6: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable headlamp --alsologtostderr -v=1: exit status 11 (240.20282ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:27:37.466186   19281 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:37.466329   19281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:37.466337   19281 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:37.466341   19281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:37.466516   19281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:37.466749   19281 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:37.467065   19281 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:37.467084   19281 addons.go:622] checking whether the cluster is paused
	I1216 02:27:37.467168   19281 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:37.467179   19281 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:37.467512   19281 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:37.485180   19281 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:37.485230   19281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:37.502614   19281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:37.598196   19281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:37.598288   19281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:37.625801   19281 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:37.625841   19281 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:37.625848   19281 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:37.625853   19281 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:37.625857   19281 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:37.625862   19281 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:37.625867   19281 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:37.625871   19281 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:37.625876   19281 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:37.625884   19281 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:37.625890   19281 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:37.625904   19281 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:37.625910   19281 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:37.625913   19281 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:37.625916   19281 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:37.625920   19281 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:37.625923   19281 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:37.625926   19281 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:37.625929   19281 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:37.625932   19281 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:37.625936   19281 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:37.625939   19281 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:37.625942   19281 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:37.625944   19281 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:37.625947   19281 cri.go:89] found id: ""
	I1216 02:27:37.625982   19281 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:37.639620   19281 out.go:203] 
	W1216 02:27:37.641963   19281 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:37.641978   19281 out.go:285] * 
	* 
	W1216 02:27:37.644960   19281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:37.646301   19281 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.45s)
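(Editor's note) The MK_ADDON_DISABLE_PAUSED failures in this run all share the root cause visible in the stderr above: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", so the addon-disable paused check cannot list containers on this crio node. A minimal diagnostic sketch, assuming the addons-568105 profile from this run is still up; the commands below are illustrative and were not executed as part of the test, and the /run/crun path is only an assumption about crio possibly using crun instead of runc:

	# Does the runc state directory exist inside the node? (It is reported missing above.)
	out/minikube-linux-amd64 -p addons-568105 ssh -- 'ls -ld /run/runc || true'
	# Assumption: crio may be configured with crun, whose state would live under /run/crun instead.
	out/minikube-linux-amd64 -p addons-568105 ssh -- 'ls -ld /run/crun 2>/dev/null; sudo crictl ps --quiet | head'
	# Re-run the exact command minikube issues for the paused check, to reproduce the exit status 1.
	out/minikube-linux-amd64 -p addons-568105 ssh -- 'sudo runc list -f json'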

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-r5xh9" [bf6d8b02-072a-47e2-9270-838fc4697bd6] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003744118s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (237.178318ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:27:46.514876   20810 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:46.515167   20810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:46.515178   20810 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:46.515205   20810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:46.515418   20810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:46.515687   20810 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:46.516026   20810 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:46.516049   20810 addons.go:622] checking whether the cluster is paused
	I1216 02:27:46.516140   20810 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:46.516155   20810 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:46.516562   20810 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:46.534292   20810 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:46.534344   20810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:46.551409   20810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:46.647303   20810 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:46.647395   20810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:46.675761   20810 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:46.675786   20810 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:46.675792   20810 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:46.675798   20810 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:46.675803   20810 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:46.675809   20810 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:46.675814   20810 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:46.675847   20810 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:46.675853   20810 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:46.675865   20810 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:46.675893   20810 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:46.675899   20810 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:46.675909   20810 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:46.675915   20810 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:46.675924   20810 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:46.675934   20810 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:46.675943   20810 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:46.675951   20810 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:46.675956   20810 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:46.675961   20810 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:46.675966   20810 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:46.675971   20810 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:46.675979   20810 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:46.675985   20810 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:46.675991   20810 cri.go:89] found id: ""
	I1216 02:27:46.676044   20810 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:46.689656   20810 out.go:203] 
	W1216 02:27:46.690983   20810 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:46.691003   20810 out.go:285] * 
	* 
	W1216 02:27:46.693868   20810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:46.695076   20810 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.25s)
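Note: every MK_ADDON_DISABLE_PAUSED exit in this report (including the LocalPath, NvidiaDevicePlugin, Yakd and AmdGpuDevicePlugin sections below, and the GUEST_PAUSE/GUEST_UNPAUSE failures further down) comes from the same probe: the addon-disable path checks for paused containers by running `sudo runc list -f json`, which fails on this CRI-O node because /run/runc does not exist. The following is a minimal Go sketch that reproduces the probe outside of minikube; it is a hypothetical helper, not minikube's implementation, and the "missing state dir means nothing is paused" fallback is an assumption, not current behaviour.

// listpaused_sketch.go - hypothetical reproduction of the paused-state probe seen
// in the logs above. It only shows why `sudo runc list -f json` exits non-zero when
// /run/runc is absent, and how a caller could treat that case as "no paused
// containers" instead of aborting (assumption, not minikube's current behaviour).
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func listRuncContainers() ([]map[string]interface{}, error) {
	// runc keeps per-container state under its root dir (default /run/runc).
	// If that directory was never created, `runc list` fails with
	// "open /run/runc: no such file or directory" - exactly the error above.
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		return nil, nil // assumption: no state dir, so nothing can be paused
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	if len(out) == 0 {
		return nil, nil
	}
	var containers []map[string]interface{}
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	return containers, nil
}

func main() {
	cs, err := listRuncContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("runc reports %d container(s)\n", len(cs))
}

The error string "open /run/runc: no such file or directory" appears verbatim in each failing section below; stat-ing the state directory first, as in the sketch, is one way to distinguish "runc has never created state here" from a genuine listing failure.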

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.1s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-568105 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-568105 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-568105 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [464f8c28-5e0e-467d-a128-6f3f9734a352] Pending
helpers_test.go:353: "test-local-path" [464f8c28-5e0e-467d-a128-6f3f9734a352] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [464f8c28-5e0e-467d-a128-6f3f9734a352] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [464f8c28-5e0e-467d-a128-6f3f9734a352] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003198458s
addons_test.go:969: (dbg) Run:  kubectl --context addons-568105 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 ssh "cat /opt/local-path-provisioner/pvc-aac4cbfb-90a8-4cdd-bbae-a3dd306f3bb5_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-568105 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-568105 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (242.927984ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:27:45.563479   20696 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:45.563637   20696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:45.563647   20696 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:45.563651   20696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:45.563862   20696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:45.564146   20696 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:45.564474   20696 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:45.564492   20696 addons.go:622] checking whether the cluster is paused
	I1216 02:27:45.564573   20696 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:45.564584   20696 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:45.564969   20696 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:45.582489   20696 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:45.582541   20696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:45.600190   20696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:45.697375   20696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:45.697443   20696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:45.725864   20696 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:45.725884   20696 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:45.725888   20696 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:45.725891   20696 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:45.725894   20696 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:45.725898   20696 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:45.725901   20696 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:45.725903   20696 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:45.725906   20696 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:45.725912   20696 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:45.725915   20696 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:45.725918   20696 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:45.725921   20696 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:45.725924   20696 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:45.725933   20696 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:45.725941   20696 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:45.725944   20696 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:45.725948   20696 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:45.725951   20696 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:45.725953   20696 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:45.725961   20696 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:45.725966   20696 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:45.725969   20696 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:45.725976   20696 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:45.725981   20696 cri.go:89] found id: ""
	I1216 02:27:45.726020   20696 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:45.738987   20696 out.go:203] 
	W1216 02:27:45.740061   20696 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:45.740081   20696 out.go:285] * 
	* 
	W1216 02:27:45.745121   20696 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:45.746963   20696 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.10s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-kzstn" [22f42d03-6c10-402e-932b-11e904a9bb3c] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003548622s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (251.739158ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:27:40.257175   19469 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:40.257504   19469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:40.257517   19469 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:40.257524   19469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:40.257798   19469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:40.258171   19469 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:40.258542   19469 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:40.258571   19469 addons.go:622] checking whether the cluster is paused
	I1216 02:27:40.258662   19469 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:40.258675   19469 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:40.259177   19469 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:40.280749   19469 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:40.280795   19469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:40.302326   19469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:40.401263   19469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:40.401341   19469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:40.430052   19469 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:40.430080   19469 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:40.430085   19469 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:40.430089   19469 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:40.430092   19469 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:40.430096   19469 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:40.430099   19469 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:40.430110   19469 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:40.430113   19469 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:40.430138   19469 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:40.430143   19469 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:40.430146   19469 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:40.430149   19469 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:40.430152   19469 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:40.430155   19469 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:40.430166   19469 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:40.430173   19469 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:40.430184   19469 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:40.430187   19469 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:40.430190   19469 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:40.430193   19469 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:40.430196   19469 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:40.430199   19469 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:40.430202   19469 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:40.430204   19469 cri.go:89] found id: ""
	I1216 02:27:40.430256   19469 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:40.444297   19469 out.go:203] 
	W1216 02:27:40.445465   19469 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:40.445486   19469 out.go:285] * 
	* 
	W1216 02:27:40.448366   19469 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:40.449553   19469 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-hsz94" [1f081f40-48ea-4159-8a8f-98b8a3a22c20] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003651228s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable yakd --alsologtostderr -v=1: exit status 11 (248.265338ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:27:57.056947   21701 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:57.057207   21701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:57.057218   21701 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:57.057222   21701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:57.057415   21701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:57.057655   21701 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:57.057994   21701 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:57.058016   21701 addons.go:622] checking whether the cluster is paused
	I1216 02:27:57.058099   21701 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:57.058111   21701 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:57.058470   21701 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:57.076366   21701 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:57.076455   21701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:57.094050   21701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:57.191783   21701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:57.191872   21701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:57.223133   21701 cri.go:89] found id: "4efff691fcef802737c6fd1fa0c742d52a1b12d293a75b61aebf6b333a341078"
	I1216 02:27:57.223168   21701 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:57.223174   21701 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:57.223179   21701 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:57.223184   21701 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:57.223189   21701 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:57.223194   21701 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:57.223198   21701 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:57.223203   21701 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:57.223210   21701 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:57.223215   21701 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:57.223220   21701 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:57.223225   21701 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:57.223230   21701 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:57.223234   21701 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:57.223245   21701 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:57.223253   21701 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:57.223260   21701 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:57.223264   21701 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:57.223269   21701 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:57.223273   21701 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:57.223277   21701 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:57.223282   21701 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:57.223287   21701 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:57.223292   21701 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:57.223296   21701 cri.go:89] found id: ""
	I1216 02:27:57.223341   21701 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:57.241968   21701 out.go:203] 
	W1216 02:27:57.243254   21701 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:57.243294   21701 out.go:285] * 
	* 
	W1216 02:27:57.247915   21701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:57.249300   21701 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-zpwqw" [493d2ba0-418b-49e5-aab2-a024a03781af] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.002893596s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-568105 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-568105 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (237.969962ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:27:55.206094   21618 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:27:55.206380   21618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:55.206391   21618 out.go:374] Setting ErrFile to fd 2...
	I1216 02:27:55.206395   21618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:27:55.206637   21618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:27:55.206961   21618 mustload.go:66] Loading cluster: addons-568105
	I1216 02:27:55.207327   21618 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:55.207347   21618 addons.go:622] checking whether the cluster is paused
	I1216 02:27:55.207441   21618 config.go:182] Loaded profile config "addons-568105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:27:55.207457   21618 host.go:66] Checking if "addons-568105" exists ...
	I1216 02:27:55.207913   21618 cli_runner.go:164] Run: docker container inspect addons-568105 --format={{.State.Status}}
	I1216 02:27:55.225800   21618 ssh_runner.go:195] Run: systemctl --version
	I1216 02:27:55.225867   21618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-568105
	I1216 02:27:55.243425   21618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/addons-568105/id_rsa Username:docker}
	I1216 02:27:55.339191   21618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:27:55.339277   21618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:27:55.367980   21618 cri.go:89] found id: "4efff691fcef802737c6fd1fa0c742d52a1b12d293a75b61aebf6b333a341078"
	I1216 02:27:55.368009   21618 cri.go:89] found id: "5a9662216a426360124c1eecccc23cde06e141f818f1da892a5cce27a15b894e"
	I1216 02:27:55.368016   21618 cri.go:89] found id: "7d237fed170b0f74115bcae2405563d1ac53fbba532b443ffce5fbf944cab010"
	I1216 02:27:55.368022   21618 cri.go:89] found id: "ae9b9276f546bdbed442acf3500522ea5fdfefe75f9c36125e0537f71d441bf3"
	I1216 02:27:55.368027   21618 cri.go:89] found id: "7e47591ff793171714cfdb06357fdcfdfe4b4f41225acf64606240753eecf39d"
	I1216 02:27:55.368039   21618 cri.go:89] found id: "e1658f146a4d2c8c6f2e33de73a27335d9135447e4d2a3663c4a837bd2e4253c"
	I1216 02:27:55.368043   21618 cri.go:89] found id: "978c45196be4330b838d2476a50a78e3ad07cbdadd2d823a83ca5a10d648fa62"
	I1216 02:27:55.368048   21618 cri.go:89] found id: "dbded21ce9b6ca087fac5c7db5a0fcf1eebde7a8facf68593339c73a92b85008"
	I1216 02:27:55.368052   21618 cri.go:89] found id: "5258c264d4ef16d886b758351ff7757a18ec40aa60967470d194f37dadc567d2"
	I1216 02:27:55.368062   21618 cri.go:89] found id: "f2b1c7c11696c2ed5d7565ec1778e3d7c13e31b1024569ff1500184a90e5b185"
	I1216 02:27:55.368068   21618 cri.go:89] found id: "1034828f8f00695ee08eff06512edf2ebbfbb6a1638f63bac1976eeda5d9d7f9"
	I1216 02:27:55.368071   21618 cri.go:89] found id: "51cd2f7227a668a2ee51c6b9e4e3e4494b28f3d979a0cbb9c8819b6c63e67a01"
	I1216 02:27:55.368073   21618 cri.go:89] found id: "f07eb262fc567ada8bfb1b4dfd0d707476ea598eb9e480a28771fc8fb3a54650"
	I1216 02:27:55.368076   21618 cri.go:89] found id: "c790a5dda1f082ce1cbc591ef52d8a4064dc47c41c2f3f367e66bbf2ecb90c3e"
	I1216 02:27:55.368078   21618 cri.go:89] found id: "c3d2e4a1a0c55839499f9a579a9a7d687f4f2ff10423c42303b4a6824eac07b6"
	I1216 02:27:55.368085   21618 cri.go:89] found id: "aacc04b82103ab6be3ac76048f63aa0373dcb861e2e3979032c82989df2ece84"
	I1216 02:27:55.368090   21618 cri.go:89] found id: "4e4882ff4f3f093bbcdf556964fa2c00b4c2d29e722fa4322271de85562e6a59"
	I1216 02:27:55.368095   21618 cri.go:89] found id: "df8bdac96f7e849ec2a5a0bdd5fc92c64a3ee39022029c0ca3f7a45e8aa12fae"
	I1216 02:27:55.368099   21618 cri.go:89] found id: "ae4534dbc38ecdfc6aa9aaffeaddb45f95747884b0b891a35040aad2243cf75b"
	I1216 02:27:55.368104   21618 cri.go:89] found id: "4472bad932d447d3426e75459dfc64d343889786e8c144580ed42cf14962fe72"
	I1216 02:27:55.368109   21618 cri.go:89] found id: "42bdabbf350a03098dead7dc7970d1ce2b5ea563243abd69d8bb4dfa0fcd9cae"
	I1216 02:27:55.368114   21618 cri.go:89] found id: "168b7336b0d71c8152188aba803f5f2ee707ddbcbf9b154d815d3d227afd1e9b"
	I1216 02:27:55.368119   21618 cri.go:89] found id: "5fc64e9c331d15e02724df805b2055489735cbf5fc07557f4b9785340a6ce800"
	I1216 02:27:55.368123   21618 cri.go:89] found id: "f3d9e1dc84639370c17020ac7e17f55de43448d54e5a20946f972a861de70079"
	I1216 02:27:55.368131   21618 cri.go:89] found id: "c1f7c97ecb4111c8ddce7be47a9bee1cfd82a2fc7dc190997b7e9bdc1664fbed"
	I1216 02:27:55.368136   21618 cri.go:89] found id: ""
	I1216 02:27:55.368184   21618 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:27:55.381765   21618 out.go:203] 
	W1216 02:27:55.382906   21618 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:27:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:27:55.382923   21618 out.go:285] * 
	* 
	W1216 02:27:55.385814   21618 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:27:55.386903   21618 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-568105 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 image ls --format short --alsologtostderr: (2.331276988s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-781918 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-781918 image ls --format short --alsologtostderr:
I1216 02:33:32.995518   48604 out.go:360] Setting OutFile to fd 1 ...
I1216 02:33:32.995879   48604 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:32.995891   48604 out.go:374] Setting ErrFile to fd 2...
I1216 02:33:32.995898   48604 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:32.996189   48604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:33:32.996930   48604 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:32.997027   48604 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:32.997609   48604 cli_runner.go:164] Run: docker container inspect functional-781918 --format={{.State.Status}}
I1216 02:33:33.021583   48604 ssh_runner.go:195] Run: systemctl --version
I1216 02:33:33.021641   48604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-781918
I1216 02:33:33.044404   48604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-781918/id_rsa Username:docker}
I1216 02:33:33.153772   48604 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:33:35.189108   48604 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.035303664s)
W1216 02:33:35.189167   48604 cache_images.go:736] Failed to list images for profile functional-781918 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1216 02:33:35.187009    7261 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-16T02:33:35Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.33s)
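Note: all four ImageList failures (this one and ImageListTable, ImageListJson, ImageListYaml below) follow the same pattern: `sudo crictl images --output json` completes in just over two seconds and then aborts with DeadlineExceeded, which looks consistent with crictl's client-side RPC timeout rather than with the image service being down. A small Go sketch of a retry with an explicit longer timeout follows; the helper name and the assumption that the installed crictl accepts a duration value for its global --timeout flag are mine, not part of the test suite.

// imagelist_retry_sketch.go - hypothetical helper, not part of minikube: reruns the
// failing command from the log above with a longer client timeout so a slow CRI-O
// image service has a chance to answer before the RPC deadline fires.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func crictlImagesJSON(timeout string) ([]byte, error) {
	// assumption: the installed crictl accepts a duration for its global
	// --timeout flag (e.g. "10s"); older releases may expect plain seconds.
	cmd := exec.Command("sudo", "crictl", "--timeout", timeout, "images", "--output", "json")
	cmd.Stderr = os.Stderr // keep crictl's own error output visible
	return cmd.Output()
}

func main() {
	out, err := crictlImagesJSON("10s")
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl images failed:", err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}

If the call still times out with the longer deadline, the slowness is in the CRI-O image service itself rather than in the client timeout, which would point at the node rather than at the test.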

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 image ls --format table --alsologtostderr: (2.252849123s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-781918 image ls --format table --alsologtostderr:
┌───────┬─────┬──────────┬──────┐
│ IMAGE │ TAG │ IMAGE ID │ SIZE │
└───────┴─────┴──────────┴──────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-781918 image ls --format table --alsologtostderr:
I1216 02:33:35.347645   49091 out.go:360] Setting OutFile to fd 1 ...
I1216 02:33:35.357981   49091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:35.357998   49091 out.go:374] Setting ErrFile to fd 2...
I1216 02:33:35.358006   49091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:35.358227   49091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:33:35.358801   49091 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:35.358938   49091 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:35.359519   49091 cli_runner.go:164] Run: docker container inspect functional-781918 --format={{.State.Status}}
I1216 02:33:35.379612   49091 ssh_runner.go:195] Run: systemctl --version
I1216 02:33:35.379669   49091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-781918
I1216 02:33:35.396857   49091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-781918/id_rsa Username:docker}
I1216 02:33:35.496057   49091 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:33:37.530149   49091 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.03403999s)
W1216 02:33:37.530262   49091 cache_images.go:736] Failed to list images for profile functional-781918 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1216 02:33:37.527117    7472 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-16T02:33:37Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected │ registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (2.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 image ls --format json --alsologtostderr: (2.237440248s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-781918 image ls --format json --alsologtostderr:
[]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-781918 image ls --format json --alsologtostderr:
I1216 02:33:35.281898   49078 out.go:360] Setting OutFile to fd 1 ...
I1216 02:33:35.284750   49078 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:35.284764   49078 out.go:374] Setting ErrFile to fd 2...
I1216 02:33:35.284768   49078 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:35.284954   49078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:33:35.285559   49078 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:35.285674   49078 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:35.286164   49078 cli_runner.go:164] Run: docker container inspect functional-781918 --format={{.State.Status}}
I1216 02:33:35.304955   49078 ssh_runner.go:195] Run: systemctl --version
I1216 02:33:35.305001   49078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-781918
I1216 02:33:35.323795   49078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-781918/id_rsa Username:docker}
I1216 02:33:35.423387   49078 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:33:37.450315   49078 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.026893767s)
W1216 02:33:37.450395   49078 cache_images.go:736] Failed to list images for profile functional-781918 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1216 02:33:37.447895    7439 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-16T02:33:37Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (2.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 image ls --format yaml --alsologtostderr: (2.370674171s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-781918 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-781918 image ls --format yaml --alsologtostderr:
I1216 02:33:33.002612   48626 out.go:360] Setting OutFile to fd 1 ...
I1216 02:33:33.002739   48626 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:33.002752   48626 out.go:374] Setting ErrFile to fd 2...
I1216 02:33:33.002759   48626 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:33.003122   48626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:33:33.003944   48626 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:33.004090   48626 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:33.004738   48626 cli_runner.go:164] Run: docker container inspect functional-781918 --format={{.State.Status}}
I1216 02:33:33.029305   48626 ssh_runner.go:195] Run: systemctl --version
I1216 02:33:33.029377   48626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-781918
I1216 02:33:33.056047   48626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-781918/id_rsa Username:docker}
I1216 02:33:33.163791   48626 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:33:35.197512   48626 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.033693085s)
W1216 02:33:35.197570   48626 cache_images.go:736] Failed to list images for profile functional-781918 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1216 02:33:35.195422    7267 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-16T02:33:35Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (2.37s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.44s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-620276 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-620276 --output=json --user=testUser: exit status 80 (2.435469858s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f2946927-bc11-4853-9bd6-988db13cf319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-620276 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"63910ab1-962b-4184-b6f7-ec30898c333c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T02:45:20Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"28970320-d120-4ea8-8516-c3e7bc9c85cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-620276 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.44s)
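With --output=json, minikube emits one CloudEvents-style JSON object per line, as shown above: io.k8s.sigs.minikube.step events for progress and io.k8s.sigs.minikube.error events carrying data.message and data.exitcode when something goes wrong. Any caller consuming that stream can decode the lines and surface the error events; a minimal sketch, using only the fields visible in the output above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event keeps just the fields visible in the log above; real events carry more.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: out/minikube-linux-amd64 pause -p <profile> --output=json
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}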

                                                
                                    
TestJSONOutput/unpause/Command (2.24s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-620276 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-620276 --output=json --user=testUser: exit status 80 (2.240887289s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"66d920d9-1958-4859-9417-581d11033e78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-620276 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"aaf4e68b-026b-4281-b620-7a0426de5b72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T02:45:22Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"3cc20b26-3cee-4ff3-8b16-addc82a4da63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-620276 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.24s)
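Both the pause and unpause failures above reduce to the same probe: minikube runs `sudo runc list -f json` on the node to enumerate runtime containers, retries briefly, and gives up when the command keeps exiting non-zero because /run/runc does not exist on this crio node. Purely as an illustration (not minikube's actual code), such a probe decodes runc's JSON array of container states; treating the missing state directory as an empty list, and the id/status field names, are assumptions of this sketch:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer keeps only the fields this sketch needs; the full runc state
// JSON has more.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listRunc returns the containers runc knows about. Treating the missing
// /run/runc directory as "nothing running" is an assumption of this sketch,
// not minikube's behaviour (which retries and then fails, as shown above).
func listRunc() ([]runcContainer, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // state directory not created yet
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	return cs, nil
}

func main() {
	cs, err := listRunc()
	fmt.Println(len(cs), err)
}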

                                                
                                    
TestPause/serial/Pause (6.19s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-837191 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-837191 --alsologtostderr -v=5: exit status 80 (2.512704661s)

                                                
                                                
-- stdout --
	* Pausing node pause-837191 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:59:30.794748  211562 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:59:30.794996  211562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:59:30.795007  211562 out.go:374] Setting ErrFile to fd 2...
	I1216 02:59:30.795011  211562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:59:30.795196  211562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:59:30.795435  211562 out.go:368] Setting JSON to false
	I1216 02:59:30.795452  211562 mustload.go:66] Loading cluster: pause-837191
	I1216 02:59:30.795802  211562 config.go:182] Loaded profile config "pause-837191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:30.796269  211562 cli_runner.go:164] Run: docker container inspect pause-837191 --format={{.State.Status}}
	I1216 02:59:30.819991  211562 host.go:66] Checking if "pause-837191" exists ...
	I1216 02:59:30.820365  211562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:59:30.885696  211562 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-16 02:59:30.875367981 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:59:30.886625  211562 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765836331-22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765836331-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-837191 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 02:59:30.888599  211562 out.go:179] * Pausing node pause-837191 ... 
	I1216 02:59:30.889957  211562 host.go:66] Checking if "pause-837191" exists ...
	I1216 02:59:30.890229  211562 ssh_runner.go:195] Run: systemctl --version
	I1216 02:59:30.890273  211562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:30.908696  211562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/pause-837191/id_rsa Username:docker}
	I1216 02:59:31.010238  211562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:59:31.023060  211562 pause.go:52] kubelet running: true
	I1216 02:59:31.023127  211562 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 02:59:31.156555  211562 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 02:59:31.156664  211562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 02:59:31.238661  211562 cri.go:89] found id: "bf2d43158531635936153e35498027f85dac2c9d92d2a0fc1c48c72773fdfc76"
	I1216 02:59:31.238692  211562 cri.go:89] found id: "b844e62002d1615c3f7e7a89b2acf6de0af8614c0c7cca7e2885af4a6ba3a0d2"
	I1216 02:59:31.238698  211562 cri.go:89] found id: "88215e5886950e140a7ff17f1db589734c05090b77ed751818f3f1d5a4c3bd38"
	I1216 02:59:31.238703  211562 cri.go:89] found id: "5b9d7480573c6636e2e3a391994a40dfc711d0c4e6fcbeb6672b80027565f2a1"
	I1216 02:59:31.238708  211562 cri.go:89] found id: "1d892921aadff5d7982b5f0a3c22519e237473e3650490fb4b559b2217603788"
	I1216 02:59:31.238713  211562 cri.go:89] found id: "b3a30f62410fca50e714e57aaf10e73ede2ea14f906c88fdfb4b48e64594cad5"
	I1216 02:59:31.238718  211562 cri.go:89] found id: "8611286a63a130f61845da98343f94d94d7a2f6bb2d895ef06c389f57d9c11aa"
	I1216 02:59:31.238722  211562 cri.go:89] found id: ""
	I1216 02:59:31.238760  211562 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:59:31.254684  211562 retry.go:31] will retry after 200.290379ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:59:31Z" level=error msg="open /run/runc: no such file or directory"
	I1216 02:59:31.456032  211562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:59:31.471141  211562 pause.go:52] kubelet running: false
	I1216 02:59:31.471207  211562 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 02:59:31.606716  211562 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 02:59:31.606832  211562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 02:59:31.680604  211562 cri.go:89] found id: "bf2d43158531635936153e35498027f85dac2c9d92d2a0fc1c48c72773fdfc76"
	I1216 02:59:31.680623  211562 cri.go:89] found id: "b844e62002d1615c3f7e7a89b2acf6de0af8614c0c7cca7e2885af4a6ba3a0d2"
	I1216 02:59:31.680627  211562 cri.go:89] found id: "88215e5886950e140a7ff17f1db589734c05090b77ed751818f3f1d5a4c3bd38"
	I1216 02:59:31.680630  211562 cri.go:89] found id: "5b9d7480573c6636e2e3a391994a40dfc711d0c4e6fcbeb6672b80027565f2a1"
	I1216 02:59:31.680633  211562 cri.go:89] found id: "1d892921aadff5d7982b5f0a3c22519e237473e3650490fb4b559b2217603788"
	I1216 02:59:31.680636  211562 cri.go:89] found id: "b3a30f62410fca50e714e57aaf10e73ede2ea14f906c88fdfb4b48e64594cad5"
	I1216 02:59:31.680639  211562 cri.go:89] found id: "8611286a63a130f61845da98343f94d94d7a2f6bb2d895ef06c389f57d9c11aa"
	I1216 02:59:31.680641  211562 cri.go:89] found id: ""
	I1216 02:59:31.680677  211562 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:59:31.695444  211562 retry.go:31] will retry after 465.590353ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:59:31Z" level=error msg="open /run/runc: no such file or directory"
	I1216 02:59:32.162047  211562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:59:32.174674  211562 pause.go:52] kubelet running: false
	I1216 02:59:32.174722  211562 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 02:59:32.285229  211562 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 02:59:32.285293  211562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 02:59:32.354752  211562 cri.go:89] found id: "bf2d43158531635936153e35498027f85dac2c9d92d2a0fc1c48c72773fdfc76"
	I1216 02:59:32.354774  211562 cri.go:89] found id: "b844e62002d1615c3f7e7a89b2acf6de0af8614c0c7cca7e2885af4a6ba3a0d2"
	I1216 02:59:32.354781  211562 cri.go:89] found id: "88215e5886950e140a7ff17f1db589734c05090b77ed751818f3f1d5a4c3bd38"
	I1216 02:59:32.354786  211562 cri.go:89] found id: "5b9d7480573c6636e2e3a391994a40dfc711d0c4e6fcbeb6672b80027565f2a1"
	I1216 02:59:32.354791  211562 cri.go:89] found id: "1d892921aadff5d7982b5f0a3c22519e237473e3650490fb4b559b2217603788"
	I1216 02:59:32.354795  211562 cri.go:89] found id: "b3a30f62410fca50e714e57aaf10e73ede2ea14f906c88fdfb4b48e64594cad5"
	I1216 02:59:32.354799  211562 cri.go:89] found id: "8611286a63a130f61845da98343f94d94d7a2f6bb2d895ef06c389f57d9c11aa"
	I1216 02:59:32.354803  211562 cri.go:89] found id: ""
	I1216 02:59:32.354870  211562 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:59:32.366547  211562 retry.go:31] will retry after 640.993715ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:59:32Z" level=error msg="open /run/runc: no such file or directory"
	I1216 02:59:33.008018  211562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:59:33.022159  211562 pause.go:52] kubelet running: false
	I1216 02:59:33.022230  211562 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 02:59:33.138513  211562 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 02:59:33.138593  211562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 02:59:33.208927  211562 cri.go:89] found id: "bf2d43158531635936153e35498027f85dac2c9d92d2a0fc1c48c72773fdfc76"
	I1216 02:59:33.208953  211562 cri.go:89] found id: "b844e62002d1615c3f7e7a89b2acf6de0af8614c0c7cca7e2885af4a6ba3a0d2"
	I1216 02:59:33.208959  211562 cri.go:89] found id: "88215e5886950e140a7ff17f1db589734c05090b77ed751818f3f1d5a4c3bd38"
	I1216 02:59:33.208964  211562 cri.go:89] found id: "5b9d7480573c6636e2e3a391994a40dfc711d0c4e6fcbeb6672b80027565f2a1"
	I1216 02:59:33.208968  211562 cri.go:89] found id: "1d892921aadff5d7982b5f0a3c22519e237473e3650490fb4b559b2217603788"
	I1216 02:59:33.208973  211562 cri.go:89] found id: "b3a30f62410fca50e714e57aaf10e73ede2ea14f906c88fdfb4b48e64594cad5"
	I1216 02:59:33.208977  211562 cri.go:89] found id: "8611286a63a130f61845da98343f94d94d7a2f6bb2d895ef06c389f57d9c11aa"
	I1216 02:59:33.208982  211562 cri.go:89] found id: ""
	I1216 02:59:33.209038  211562 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 02:59:33.223047  211562 out.go:203] 
	W1216 02:59:33.224108  211562 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:59:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:59:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 02:59:33.224126  211562 out.go:285] * 
	* 
	W1216 02:59:33.230772  211562 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:59:33.232065  211562 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-837191 --alsologtostderr -v=5" : exit status 80
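The stderr above shows the pause sequence: disable the kubelet, enumerate CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then list runtime containers via `sudo runc list -f json`, retrying with growing delays (roughly 200ms, 466ms and 641ms here) before exiting with GUEST_PAUSE. A generic retry-with-increasing-delay wrapper of the kind those retry.go lines suggest could look like the sketch below; the jittered doubling is an assumption for illustration, not minikube's exact policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, roughly doubling a jittered delay
// between tries, and returns the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retry(4, 200*time.Millisecond, func() error {
		return errors.New("list running: runc list failed") // stand-in for the real probe
	})
	fmt.Println(err)
}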
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-837191
helpers_test.go:244: (dbg) docker inspect pause-837191:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71",
	        "Created": "2025-12-16T02:58:42.695717357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:58:43.165384346Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/hostname",
	        "HostsPath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/hosts",
	        "LogPath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71-json.log",
	        "Name": "/pause-837191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-837191:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-837191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71",
	                "LowerDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-837191",
	                "Source": "/var/lib/docker/volumes/pause-837191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-837191",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-837191",
	                "name.minikube.sigs.k8s.io": "pause-837191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3bf8b98c76f0422d2025ddd996cd4d08a9fa597def55f7aff705bfe1caae86c1",
	            "SandboxKey": "/var/run/docker/netns/3bf8b98c76f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-837191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8a06604d014961f2f4dab4932da9bc10e4eabf846faad30337573f8dda24095",
	                    "EndpointID": "ab272c4dc124edf219acb74b2f8ebbc028ab8ce5ed98ddd1f9ee93c1b919781a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "36:38:b1:41:8b:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-837191",
	                        "64b06a5a5665"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
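The inspect output above is also where the SSH connection details come from: the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` template seen earlier in the log resolves the 22/tcp binding, which here is 127.0.0.1:32978. The same lookup can be done by decoding the inspect JSON directly; a minimal sketch that keeps only the fields it uses:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry keeps only the port-binding fields needed to find the host
// side of the container's 22/tcp mapping.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

// sshEndpoint returns the host ip:port mapped to the container's 22/tcp.
func sshEndpoint(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no 22/tcp binding for %s", container)
	}
	return bindings[0].HostIP + ":" + bindings[0].HostPort, nil
}

func main() {
	fmt.Println(sshEndpoint("pause-837191"))
}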
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-837191 -n pause-837191
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-837191 -n pause-837191: exit status 2 (346.712168ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-837191 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-837191 logs -n 25: (1.03269835s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-708409 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --cancel-scheduled                                                                                 │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │ 16 Dec 25 02:57 UTC │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │ 16 Dec 25 02:57 UTC │
	│ delete  │ -p scheduled-stop-708409                                                                                                    │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:58 UTC │
	│ start   │ -p insufficient-storage-058217 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-058217 │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │                     │
	│ delete  │ -p insufficient-storage-058217                                                                                              │ insufficient-storage-058217 │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:58 UTC │
	│ start   │ -p pause-837191 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-837191                │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p force-systemd-env-849216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-849216    │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p offline-crio-827391 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-827391         │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p stopped-upgrade-863865 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-863865      │ jenkins │ v1.35.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ delete  │ -p force-systemd-env-849216                                                                                                 │ force-systemd-env-849216    │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ stop    │ stopped-upgrade-863865 stop                                                                                                 │ stopped-upgrade-863865      │ jenkins │ v1.35.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p force-systemd-flag-546137 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-546137   │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p stopped-upgrade-863865 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-863865      │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	│ delete  │ -p offline-crio-827391                                                                                                      │ offline-crio-827391         │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p cert-expiration-332150 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-332150      │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	│ start   │ -p pause-837191 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-837191                │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ pause   │ -p pause-837191 --alsologtostderr -v=5                                                                                      │ pause-837191                │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	│ ssh     │ force-systemd-flag-546137 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-546137   │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ delete  │ -p force-systemd-flag-546137                                                                                                │ force-systemd-flag-546137   │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:59:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:59:23.233292  209171 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:59:23.233607  209171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:59:23.233618  209171 out.go:374] Setting ErrFile to fd 2...
	I1216 02:59:23.233623  209171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:59:23.233952  209171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:59:23.234502  209171 out.go:368] Setting JSON to false
	I1216 02:59:23.235960  209171 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2515,"bootTime":1765851448,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:59:23.236033  209171 start.go:143] virtualization: kvm guest
	I1216 02:59:23.318265  209171 out.go:179] * [pause-837191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:59:23.409569  209171 notify.go:221] Checking for updates...
	I1216 02:59:23.483669  209171 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:59:23.626318  209171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:59:23.782516  209171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:59:23.859864  209171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:59:23.935606  209171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:59:23.954347  209171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:59:23.956893  209171 config.go:182] Loaded profile config "pause-837191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:23.957636  209171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:59:23.980872  209171 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:59:23.981013  209171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:59:24.039156  209171 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-16 02:59:24.029448813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:59:24.039294  209171 docker.go:319] overlay module found
	I1216 02:59:24.245989  209171 out.go:179] * Using the docker driver based on existing profile
	I1216 02:59:24.268305  209171 start.go:309] selected driver: docker
	I1216 02:59:24.268330  209171 start.go:927] validating driver "docker" against &{Name:pause-837191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-837191 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:59:24.268479  209171 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:59:24.268587  209171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:59:24.322956  209171 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-16 02:59:24.313906605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:59:24.323637  209171 cni.go:84] Creating CNI manager for ""
	I1216 02:59:24.323716  209171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:59:24.323775  209171 start.go:353] cluster config:
	{Name:pause-837191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-837191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:59:22.377601  203984 out.go:252]   - Booting up control plane ...
	I1216 02:59:22.377750  203984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 02:59:22.377888  203984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 02:59:22.381018  203984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 02:59:22.398858  203984 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 02:59:22.399008  203984 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 02:59:22.407768  203984 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 02:59:22.408465  203984 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 02:59:22.408542  203984 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 02:59:22.533101  203984 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 02:59:22.533233  203984 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 02:59:23.034634  203984 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.459653ms
	I1216 02:59:23.047536  203984 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 02:59:23.047840  203984 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 02:59:23.047989  203984 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 02:59:23.048121  203984 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
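The three [control-plane-check] probes above poll well-known health endpoints until they answer 200 OK or the 4m0s budget expires: kube-apiserver on /livez, kube-controller-manager on 127.0.0.1:10257/healthz, and kube-scheduler on 127.0.0.1:10259/livez. Below is a minimal sketch of such a polling loop in Go, with the endpoints copied from the log; it is illustrative only, not kubeadm's actual waiter, and TLS verification is skipped solely because these components serve self-signed certificates during bootstrap.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
// TLS verification is skipped only because the bootstrapping components
// serve self-signed certificates; do not do this outside of an example.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s still unhealthy after %s", url, timeout)
}

func main() {
	// endpoints copied from the control-plane-check lines above
	for _, u := range []string{
		"https://192.168.85.2:8443/livez",
		"https://127.0.0.1:10257/healthz",
		"https://127.0.0.1:10259/livez",
	} {
		if err := waitHealthy(u, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
}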
	I1216 02:59:24.428521  209171 out.go:179] * Starting "pause-837191" primary control-plane node in "pause-837191" cluster
	I1216 02:59:24.553205  209171 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 02:59:24.614222  209171 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:59:24.617999  209171 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:59:24.618055  209171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:59:24.618059  209171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 02:59:24.618089  209171 cache.go:65] Caching tarball of preloaded images
	I1216 02:59:24.618210  209171 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 02:59:24.618226  209171 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 02:59:24.618396  209171 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/config.json ...
	I1216 02:59:24.639165  209171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:59:24.639184  209171 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:59:24.639201  209171 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:59:24.639228  209171 start.go:360] acquireMachinesLock for pause-837191: {Name:mkc1719115a1db4c05c8c9b366c2d745a021f647 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:59:24.639282  209171 start.go:364] duration metric: took 37.575µs to acquireMachinesLock for "pause-837191"
	I1216 02:59:24.639302  209171 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:59:24.639309  209171 fix.go:54] fixHost starting: 
	I1216 02:59:24.639502  209171 cli_runner.go:164] Run: docker container inspect pause-837191 --format={{.State.Status}}
	I1216 02:59:24.658410  209171 fix.go:112] recreateIfNeeded on pause-837191: state=Running err=<nil>
	W1216 02:59:24.658467  209171 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:59:20.246289  208317 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 02:59:20.246585  208317 start.go:159] libmachine.API.Create for "cert-expiration-332150" (driver="docker")
	I1216 02:59:20.246613  208317 client.go:173] LocalClient.Create starting
	I1216 02:59:20.246674  208317 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 02:59:20.246704  208317 main.go:143] libmachine: Decoding PEM data...
	I1216 02:59:20.246723  208317 main.go:143] libmachine: Parsing certificate...
	I1216 02:59:20.246785  208317 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 02:59:20.246806  208317 main.go:143] libmachine: Decoding PEM data...
	I1216 02:59:20.246831  208317 main.go:143] libmachine: Parsing certificate...
	I1216 02:59:20.247248  208317 cli_runner.go:164] Run: docker network inspect cert-expiration-332150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 02:59:20.265518  208317 cli_runner.go:211] docker network inspect cert-expiration-332150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 02:59:20.265581  208317 network_create.go:284] running [docker network inspect cert-expiration-332150] to gather additional debugging logs...
	I1216 02:59:20.265595  208317 cli_runner.go:164] Run: docker network inspect cert-expiration-332150
	W1216 02:59:20.284294  208317 cli_runner.go:211] docker network inspect cert-expiration-332150 returned with exit code 1
	I1216 02:59:20.284340  208317 network_create.go:287] error running [docker network inspect cert-expiration-332150]: docker network inspect cert-expiration-332150: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-332150 not found
	I1216 02:59:20.284362  208317 network_create.go:289] output of [docker network inspect cert-expiration-332150]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-332150 not found
	
	** /stderr **
	I1216 02:59:20.284462  208317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:59:20.303059  208317 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 02:59:20.303664  208317 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 02:59:20.304318  208317 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 02:59:20.305046  208317 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f8a06604d014 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:7c:57:95:1b:cd} reservation:<nil>}
	I1216 02:59:20.305790  208317 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-750eded50674 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:65:24:9a:40:63} reservation:<nil>}
	I1216 02:59:20.306778  208317 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e40dc0}
	I1216 02:59:20.306797  208317 network_create.go:124] attempt to create docker network cert-expiration-332150 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 02:59:20.306865  208317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-332150 cert-expiration-332150
	I1216 02:59:20.362568  208317 network_create.go:108] docker network cert-expiration-332150 192.168.94.0/24 created
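network.go above walks candidate private ranges by stepping the third octet in increments of 9 (192.168.49.0/24, .58, .67, .76, .85, ...) and creates the Docker network on the first subnet that no existing bridge occupies, here 192.168.94.0/24. A simplified sketch of that stepping behaviour follows, with the taken set hard-coded from the log lines above; the real code also inspects host interfaces and tracks reservations.

package main

import "fmt"

// firstFreeSubnet mimics the stepping seen above: start at 192.168.49.0/24
// and advance the third octet by 9 until a subnet is not in the taken set.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	// subnets reported as taken in the log lines above
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24
}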
	I1216 02:59:20.362590  208317 kic.go:121] calculated static IP "192.168.94.2" for the "cert-expiration-332150" container
	I1216 02:59:20.362681  208317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 02:59:20.386442  208317 cli_runner.go:164] Run: docker volume create cert-expiration-332150 --label name.minikube.sigs.k8s.io=cert-expiration-332150 --label created_by.minikube.sigs.k8s.io=true
	I1216 02:59:20.407197  208317 oci.go:103] Successfully created a docker volume cert-expiration-332150
	I1216 02:59:20.407282  208317 cli_runner.go:164] Run: docker run --rm --name cert-expiration-332150-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-332150 --entrypoint /usr/bin/test -v cert-expiration-332150:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 02:59:20.824753  208317 oci.go:107] Successfully prepared a docker volume cert-expiration-332150
	I1216 02:59:20.824846  208317 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:59:20.824855  208317 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 02:59:20.824925  208317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-332150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 02:59:24.984017  208317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-332150:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.159042389s)
	I1216 02:59:24.984051  208317 kic.go:203] duration metric: took 4.15919296s to extract preloaded images to volume ...
	W1216 02:59:24.984135  208317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 02:59:24.984163  208317 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 02:59:24.984221  208317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 02:59:22.175497  204814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:22.675119  204814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:22.689080  204814 api_server.go:72] duration metric: took 1.014753849s to wait for apiserver process to appear ...
	I1216 02:59:22.689110  204814 api_server.go:88] waiting for apiserver healthz status ...
	I1216 02:59:22.689131  204814 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 02:59:24.731108  209171 out.go:252] * Updating the running docker "pause-837191" container ...
	I1216 02:59:24.731198  209171 machine.go:94] provisionDockerMachine start ...
	I1216 02:59:24.731309  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:24.749274  209171 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:24.749531  209171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1216 02:59:24.749546  209171 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:59:24.887497  209171 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-837191
	
	I1216 02:59:24.887518  209171 ubuntu.go:182] provisioning hostname "pause-837191"
	I1216 02:59:24.887577  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:24.906499  209171 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:24.906775  209171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1216 02:59:24.906797  209171 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-837191 && echo "pause-837191" | sudo tee /etc/hostname
	I1216 02:59:25.064168  209171 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-837191
	
	I1216 02:59:25.064264  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:25.087797  209171 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:25.088124  209171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1216 02:59:25.088154  209171 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-837191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-837191/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-837191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:59:25.245479  209171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:59:25.245506  209171 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 02:59:25.245529  209171 ubuntu.go:190] setting up certificates
	I1216 02:59:25.245541  209171 provision.go:84] configureAuth start
	I1216 02:59:25.245593  209171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-837191
	I1216 02:59:25.269984  209171 provision.go:143] copyHostCerts
	I1216 02:59:25.270054  209171 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 02:59:25.270070  209171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 02:59:25.270156  209171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 02:59:25.270317  209171 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 02:59:25.270329  209171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 02:59:25.270393  209171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 02:59:25.270503  209171 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 02:59:25.270510  209171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 02:59:25.270551  209171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 02:59:25.270643  209171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.pause-837191 san=[127.0.0.1 192.168.76.2 localhost minikube pause-837191]
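provision.go generates a server certificate whose SANs are exactly the names and addresses clients may use to reach the machine: 127.0.0.1, 192.168.76.2, localhost, minikube and pause-837191. The following is a rough sketch of issuing a certificate with those SANs in Go; it is self-signed here for brevity, whereas minikube signs it with the CA key named in the log line, and the 26280h lifetime is the CertExpiration value from the profile dump earlier in this log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-837191"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above
		DNSNames:    []string{"localhost", "minikube", "pause-837191"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}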
	I1216 02:59:25.391120  209171 provision.go:177] copyRemoteCerts
	I1216 02:59:25.391197  209171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:59:25.391240  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:25.416244  209171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/pause-837191/id_rsa Username:docker}
	I1216 02:59:25.527937  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 02:59:25.550902  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 02:59:25.579047  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:59:25.597464  209171 provision.go:87] duration metric: took 351.901799ms to configureAuth
	I1216 02:59:25.597492  209171 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:59:25.597723  209171 config.go:182] Loaded profile config "pause-837191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:25.597862  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:25.619830  209171 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:25.620064  209171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1216 02:59:25.620089  209171 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 02:59:26.015221  209171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 02:59:26.015244  209171 machine.go:97] duration metric: took 1.284032679s to provisionDockerMachine
	I1216 02:59:26.015257  209171 start.go:293] postStartSetup for "pause-837191" (driver="docker")
	I1216 02:59:26.015270  209171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:59:26.015346  209171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:59:26.015404  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:26.035752  209171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/pause-837191/id_rsa Username:docker}
	I1216 02:59:26.142980  209171 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:59:26.147717  209171 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:59:26.147750  209171 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:59:26.147763  209171 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 02:59:26.147842  209171 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 02:59:26.147941  209171 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 02:59:26.148067  209171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 02:59:26.157115  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 02:59:26.178813  209171 start.go:296] duration metric: took 163.538177ms for postStartSetup
	I1216 02:59:26.178914  209171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:59:26.178961  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:26.201406  209171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/pause-837191/id_rsa Username:docker}
	I1216 02:59:26.301413  209171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:59:26.307786  209171 fix.go:56] duration metric: took 1.668470954s for fixHost
	I1216 02:59:26.307838  209171 start.go:83] releasing machines lock for "pause-837191", held for 1.668545774s
	I1216 02:59:26.307902  209171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-837191
	I1216 02:59:26.328983  209171 ssh_runner.go:195] Run: cat /version.json
	I1216 02:59:26.329039  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:26.329085  209171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:59:26.329161  209171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-837191
	I1216 02:59:26.348792  209171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/pause-837191/id_rsa Username:docker}
	I1216 02:59:26.352628  209171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/pause-837191/id_rsa Username:docker}
	I1216 02:59:26.447423  209171 ssh_runner.go:195] Run: systemctl --version
	I1216 02:59:26.513897  209171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 02:59:26.556758  209171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:59:26.561646  209171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:59:26.561710  209171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:59:26.570329  209171 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:59:26.570354  209171 start.go:496] detecting cgroup driver to use...
	I1216 02:59:26.570384  209171 detect.go:190] detected "systemd" cgroup driver on host os
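detect.go reports the host's cgroup driver so the container runtime and kubelet can be configured consistently; the "systemd" result is reused a few lines later when 02-crio.conf is rewritten. A common heuristic, shown here only as an illustration and not as minikube's actual detection logic, is to prefer systemd whenever the unified cgroup v2 hierarchy is mounted.

package main

import (
	"fmt"
	"os"
)

// cgroupDriver guesses a cgroup driver the way many tools do: if the unified
// cgroup v2 hierarchy is mounted, prefer systemd, otherwise fall back to
// cgroupfs. Illustrative heuristic, not minikube's detect.go implementation.
func cgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println(cgroupDriver())
}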
	I1216 02:59:26.570435  209171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 02:59:26.586737  209171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 02:59:26.600139  209171 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:59:26.600198  209171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:59:26.618390  209171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:59:26.635388  209171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:59:26.814162  209171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:59:26.971561  209171 docker.go:234] disabling docker service ...
	I1216 02:59:26.971628  209171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:59:26.992399  209171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:59:27.009540  209171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:59:27.129579  209171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:59:27.240122  209171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:59:27.253475  209171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:59:27.267694  209171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 02:59:27.267766  209171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:27.276778  209171 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 02:59:27.276934  209171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:27.286100  209171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:27.295158  209171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:27.304228  209171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:59:27.311970  209171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:27.320240  209171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:27.329066  209171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:27.338044  209171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:59:27.345218  209171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:59:27.352163  209171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:59:27.458706  209171 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 02:59:27.642504  209171 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 02:59:27.642574  209171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 02:59:27.646785  209171 start.go:564] Will wait 60s for crictl version
	I1216 02:59:27.646854  209171 ssh_runner.go:195] Run: which crictl
	I1216 02:59:27.650477  209171 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:59:27.673511  209171 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 02:59:27.673586  209171 ssh_runner.go:195] Run: crio --version
	I1216 02:59:27.700536  209171 ssh_runner.go:195] Run: crio --version
	I1216 02:59:27.730460  209171 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 02:59:27.731813  209171 cli_runner.go:164] Run: docker network inspect pause-837191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:59:27.749442  209171 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 02:59:27.753627  209171 kubeadm.go:884] updating cluster {Name:pause-837191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-837191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:59:27.753758  209171 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:59:27.753797  209171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:59:27.785055  209171 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:59:27.785073  209171 crio.go:433] Images already preloaded, skipping extraction
	I1216 02:59:27.785113  209171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:59:27.811771  209171 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:59:27.811795  209171 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:59:27.811802  209171 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 02:59:27.811922  209171 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-837191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-837191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:59:27.811987  209171 ssh_runner.go:195] Run: crio config
	I1216 02:59:27.856493  209171 cni.go:84] Creating CNI manager for ""
	I1216 02:59:27.856520  209171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:59:27.856538  209171 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:59:27.856569  209171 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-837191 NodeName:pause-837191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:59:27.856785  209171 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-837191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 02:59:27.856880  209171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 02:59:27.866792  209171 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:59:27.866886  209171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:59:27.876332  209171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 02:59:27.890965  209171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 02:59:27.906298  209171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1216 02:59:27.921759  209171 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:59:27.926247  209171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:59:28.057901  209171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:59:28.073606  209171 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191 for IP: 192.168.76.2
	I1216 02:59:28.073635  209171 certs.go:195] generating shared ca certs ...
	I1216 02:59:28.073660  209171 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:28.073870  209171 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 02:59:28.073940  209171 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 02:59:28.073959  209171 certs.go:257] generating profile certs ...
	I1216 02:59:28.074075  209171 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/client.key
	I1216 02:59:28.074166  209171 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/apiserver.key.5baf7143
	I1216 02:59:28.074226  209171 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/proxy-client.key
	I1216 02:59:28.074393  209171 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 02:59:28.074443  209171 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 02:59:28.074460  209171 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 02:59:28.074505  209171 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 02:59:28.074556  209171 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:59:28.074597  209171 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 02:59:28.074672  209171 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 02:59:28.075495  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:59:28.096777  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 02:59:28.116880  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:59:28.138627  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:59:28.159865  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 02:59:28.180129  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:59:28.202353  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:59:28.223416  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 02:59:26.746624  203984 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.699307647s
	I1216 02:59:26.936989  203984 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.889912799s
	I1216 02:59:28.548587  203984 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50201802s
	I1216 02:59:28.568465  203984 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 02:59:28.582784  203984 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 02:59:28.592227  203984 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 02:59:28.592506  203984 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-flag-546137 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 02:59:28.599944  203984 kubeadm.go:319] [bootstrap-token] Using token: a3apbq.n2fz8lf0felp96af
	I1216 02:59:28.243790  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 02:59:28.264287  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:59:28.284465  209171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 02:59:28.305309  209171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:59:28.319095  209171 ssh_runner.go:195] Run: openssl version
	I1216 02:59:28.326367  209171 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 02:59:28.334927  209171 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 02:59:28.344629  209171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 02:59:28.349101  209171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 02:59:28.349162  209171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 02:59:28.393253  209171 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:59:28.401551  209171 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:28.408944  209171 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:59:28.416106  209171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:28.419676  209171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:28.419724  209171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:28.455018  209171 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:59:28.463580  209171 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 02:59:28.472354  209171 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 02:59:28.480751  209171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 02:59:28.484840  209171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 02:59:28.484893  209171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 02:59:28.520342  209171 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
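The three blocks above install each CA certificate the same way: copy it under /usr/share/ca-certificates, symlink it into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), and verify the link exists. Here is a compact sketch of that sequence, shelling out to openssl for the hash just as the log does; installCACert is a hypothetical helper and the step needs root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of certPath and symlinks
// /etc/ssl/certs/<hash>.0 to it so TLS libraries can locate the CA by hash.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error; the link is simply replaced if it exists
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}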
	I1216 02:59:28.528027  209171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:59:28.532297  209171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:59:28.572675  209171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:59:28.618673  209171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:59:28.654140  209171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:59:28.690121  209171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:59:28.723938  209171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
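Each openssl x509 -noout ... -checkend 86400 call above asks whether the certificate will expire within the next 24 hours, which is how minikube decides whether the control-plane certificates need regeneration before reusing the node. An equivalent check written directly against crypto/x509 is sketched below; expiresWithin is a hypothetical helper and the path is taken from one of the log lines.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within window,
// mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	// same certificate and window as the -checkend 86400 call above
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(soon, err)
}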
	I1216 02:59:28.759373  209171 kubeadm.go:401] StartCluster: {Name:pause-837191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-837191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:59:28.759503  209171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:59:28.759555  209171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:59:28.787282  209171 cri.go:89] found id: "bf2d43158531635936153e35498027f85dac2c9d92d2a0fc1c48c72773fdfc76"
	I1216 02:59:28.787300  209171 cri.go:89] found id: "b844e62002d1615c3f7e7a89b2acf6de0af8614c0c7cca7e2885af4a6ba3a0d2"
	I1216 02:59:28.787304  209171 cri.go:89] found id: "88215e5886950e140a7ff17f1db589734c05090b77ed751818f3f1d5a4c3bd38"
	I1216 02:59:28.787307  209171 cri.go:89] found id: "5b9d7480573c6636e2e3a391994a40dfc711d0c4e6fcbeb6672b80027565f2a1"
	I1216 02:59:28.787309  209171 cri.go:89] found id: "1d892921aadff5d7982b5f0a3c22519e237473e3650490fb4b559b2217603788"
	I1216 02:59:28.787313  209171 cri.go:89] found id: "b3a30f62410fca50e714e57aaf10e73ede2ea14f906c88fdfb4b48e64594cad5"
	I1216 02:59:28.787315  209171 cri.go:89] found id: "8611286a63a130f61845da98343f94d94d7a2f6bb2d895ef06c389f57d9c11aa"
	I1216 02:59:28.787317  209171 cri.go:89] found id: ""
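cri.go collects every kube-system container, running or exited, by passing a pod-namespace label filter to crictl; the seven IDs above are what it found on the paused node. A minimal sketch of that listing step follows, as an illustrative wrapper around the same command shown in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the cri.go step above: ask crictl for every
// container (running or not) whose pod lives in the kube-system namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(ids, err)
}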
	I1216 02:59:28.787356  209171 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 02:59:28.798554  209171 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:59:28Z" level=error msg="open /run/runc: no such file or directory"
	I1216 02:59:28.798630  209171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:59:28.806258  209171 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:59:28.806275  209171 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:59:28.806309  209171 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:59:28.813434  209171 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:59:28.813982  209171 kubeconfig.go:125] found "pause-837191" server: "https://192.168.76.2:8443"
	I1216 02:59:28.814588  209171 kapi.go:59] client config for pause-837191: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:59:28.815039  209171 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 02:59:28.815057  209171 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 02:59:28.815062  209171 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 02:59:28.815067  209171 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 02:59:28.815073  209171 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 02:59:28.815406  209171 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:59:28.824092  209171 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 02:59:28.824124  209171 kubeadm.go:602] duration metric: took 17.843538ms to restartPrimaryControlPlane
	I1216 02:59:28.824133  209171 kubeadm.go:403] duration metric: took 64.766945ms to StartCluster
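The restart path decides whether the control plane must be reconfigured by rendering a fresh kubeadm.yaml.new and diffing it against the copy already on the node; identical files mean the running cluster can be reused, which is the conclusion reached above. A minimal sketch of that comparison under the same file paths is shown below; it is illustrative only, and kubeadm.go may weigh additional signals.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig reports whether the freshly rendered kubeadm config differs
// from the one already on the node, mirroring the `diff -u ... kubeadm.yaml.new`
// step above. diff exits 0 when the files are identical and 1 when they differ.
func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}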
	I1216 02:59:28.824150  209171 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:28.824212  209171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:59:28.824961  209171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:28.825167  209171 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:59:28.825227  209171 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 02:59:28.825346  209171 config.go:182] Loaded profile config "pause-837191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:28.826761  209171 out.go:179] * Verifying Kubernetes components...
	I1216 02:59:28.826768  209171 out.go:179] * Enabled addons: 
	I1216 02:59:28.601680  203984 out.go:252]   - Configuring RBAC rules ...
	I1216 02:59:28.601857  203984 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 02:59:28.604606  203984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 02:59:28.610482  203984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 02:59:28.612657  203984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 02:59:28.614978  203984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 02:59:28.617150  203984 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 02:59:28.954613  203984 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 02:59:29.372090  203984 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 02:59:29.954858  203984 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 02:59:29.955939  203984 kubeadm.go:319] 
	I1216 02:59:29.956029  203984 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 02:59:29.956039  203984 kubeadm.go:319] 
	I1216 02:59:29.956109  203984 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 02:59:29.956124  203984 kubeadm.go:319] 
	I1216 02:59:29.956175  203984 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 02:59:29.956277  203984 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 02:59:29.956352  203984 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 02:59:29.956363  203984 kubeadm.go:319] 
	I1216 02:59:29.956447  203984 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 02:59:29.956456  203984 kubeadm.go:319] 
	I1216 02:59:29.956495  203984 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 02:59:29.956501  203984 kubeadm.go:319] 
	I1216 02:59:29.956599  203984 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 02:59:29.956732  203984 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 02:59:29.956895  203984 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 02:59:29.956906  203984 kubeadm.go:319] 
	I1216 02:59:29.957037  203984 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 02:59:29.957157  203984 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 02:59:29.957172  203984 kubeadm.go:319] 
	I1216 02:59:29.957314  203984 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a3apbq.n2fz8lf0felp96af \
	I1216 02:59:29.957458  203984 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 02:59:29.957495  203984 kubeadm.go:319] 	--control-plane 
	I1216 02:59:29.957504  203984 kubeadm.go:319] 
	I1216 02:59:29.957622  203984 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 02:59:29.957636  203984 kubeadm.go:319] 
	I1216 02:59:29.957766  203984 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a3apbq.n2fz8lf0felp96af \
	I1216 02:59:29.957906  203984 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 02:59:29.960425  203984 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 02:59:29.960528  203984 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 02:59:29.960551  203984 cni.go:84] Creating CNI manager for ""
	I1216 02:59:29.960559  203984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:59:29.962698  203984 out.go:179] * Configuring CNI (Container Networking Interface) ...
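	For reference, a minimal sketch of how the kindnet CNI selected above could be spot-checked once applied (profile name taken from this run; the app=kindnet pod label is an assumption about the kindnet DaemonSet):
	  minikube -p force-systemd-flag-546137 ssh -- ls /etc/cni/net.d
	  kubectl --context force-systemd-flag-546137 -n kube-system get pods -l app=kindnet -o wide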
	I1216 02:59:25.053161  208317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-332150 --name cert-expiration-332150 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-332150 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-332150 --network cert-expiration-332150 --ip 192.168.94.2 --volume cert-expiration-332150:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 02:59:25.385120  208317 cli_runner.go:164] Run: docker container inspect cert-expiration-332150 --format={{.State.Running}}
	I1216 02:59:25.409024  208317 cli_runner.go:164] Run: docker container inspect cert-expiration-332150 --format={{.State.Status}}
	I1216 02:59:25.437162  208317 cli_runner.go:164] Run: docker exec cert-expiration-332150 stat /var/lib/dpkg/alternatives/iptables
	I1216 02:59:25.492855  208317 oci.go:144] the created container "cert-expiration-332150" has a running status.
	I1216 02:59:25.492879  208317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/cert-expiration-332150/id_rsa...
	I1216 02:59:25.784123  208317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/cert-expiration-332150/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 02:59:25.818551  208317 cli_runner.go:164] Run: docker container inspect cert-expiration-332150 --format={{.State.Status}}
	I1216 02:59:25.842511  208317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 02:59:25.842525  208317 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-332150 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 02:59:25.893136  208317 cli_runner.go:164] Run: docker container inspect cert-expiration-332150 --format={{.State.Status}}
	I1216 02:59:25.916514  208317 machine.go:94] provisionDockerMachine start ...
	I1216 02:59:25.916627  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:25.938764  208317 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:25.939121  208317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1216 02:59:25.939130  208317 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:59:25.939839  208317 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59656->127.0.0.1:33008: read: connection reset by peer
	I1216 02:59:29.092779  208317 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-332150
	
	I1216 02:59:29.092845  208317 ubuntu.go:182] provisioning hostname "cert-expiration-332150"
	I1216 02:59:29.092905  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:29.113523  208317 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:29.113882  208317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1216 02:59:29.113894  208317 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-332150 && echo "cert-expiration-332150" | sudo tee /etc/hostname
	I1216 02:59:29.272273  208317 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-332150
	
	I1216 02:59:29.272346  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:29.290887  208317 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:29.291143  208317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1216 02:59:29.291155  208317 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-332150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-332150/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-332150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:59:29.432020  208317 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:59:29.432053  208317 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 02:59:29.432075  208317 ubuntu.go:190] setting up certificates
	I1216 02:59:29.432096  208317 provision.go:84] configureAuth start
	I1216 02:59:29.432166  208317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-332150
	I1216 02:59:29.449950  208317 provision.go:143] copyHostCerts
	I1216 02:59:29.450003  208317 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 02:59:29.450009  208317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 02:59:29.450083  208317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 02:59:29.450208  208317 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 02:59:29.450214  208317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 02:59:29.450255  208317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 02:59:29.450324  208317 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 02:59:29.450336  208317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 02:59:29.450371  208317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 02:59:29.450434  208317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-332150 san=[127.0.0.1 192.168.94.2 cert-expiration-332150 localhost minikube]
	I1216 02:59:29.496095  208317 provision.go:177] copyRemoteCerts
	I1216 02:59:29.496145  208317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:59:29.496176  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:29.514119  208317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/cert-expiration-332150/id_rsa Username:docker}
	I1216 02:59:29.612006  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 02:59:29.642933  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 02:59:29.660369  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:59:29.679402  208317 provision.go:87] duration metric: took 247.286644ms to configureAuth
	I1216 02:59:29.679421  208317 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:59:29.679613  208317 config.go:182] Loaded profile config "cert-expiration-332150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:29.679732  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:29.697486  208317 main.go:143] libmachine: Using SSH client type: native
	I1216 02:59:29.697748  208317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1216 02:59:29.697766  208317 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 02:59:29.972439  208317 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 02:59:29.972451  208317 machine.go:97] duration metric: took 4.055907887s to provisionDockerMachine
	I1216 02:59:29.972459  208317 client.go:176] duration metric: took 9.725842486s to LocalClient.Create
	I1216 02:59:29.972474  208317 start.go:167] duration metric: took 9.725891586s to libmachine.API.Create "cert-expiration-332150"
	I1216 02:59:29.972480  208317 start.go:293] postStartSetup for "cert-expiration-332150" (driver="docker")
	I1216 02:59:29.972487  208317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:59:29.972541  208317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:59:29.972613  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:29.992119  208317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/cert-expiration-332150/id_rsa Username:docker}
	I1216 02:59:29.964286  203984 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 02:59:29.968790  203984 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 02:59:29.968806  203984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 02:59:29.983060  203984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 02:59:30.193498  203984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 02:59:30.193587  203984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:59:30.193629  203984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-flag-546137 minikube.k8s.io/updated_at=2025_12_16T02_59_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=force-systemd-flag-546137 minikube.k8s.io/primary=true
	I1216 02:59:30.203685  203984 ops.go:34] apiserver oom_adj: -16
	I1216 02:59:30.271810  203984 kubeadm.go:1114] duration metric: took 78.277547ms to wait for elevateKubeSystemPrivileges
	I1216 02:59:30.283865  203984 kubeadm.go:403] duration metric: took 13.156589251s to StartCluster
	I1216 02:59:30.283909  203984 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:30.283995  203984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:59:30.285307  203984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:30.285558  203984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 02:59:30.285576  203984 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:59:30.285640  203984 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 02:59:30.285727  203984 addons.go:70] Setting storage-provisioner=true in profile "force-systemd-flag-546137"
	I1216 02:59:30.285739  203984 addons.go:70] Setting default-storageclass=true in profile "force-systemd-flag-546137"
	I1216 02:59:30.285747  203984 config.go:182] Loaded profile config "force-systemd-flag-546137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:30.285750  203984 addons.go:239] Setting addon storage-provisioner=true in "force-systemd-flag-546137"
	I1216 02:59:30.285757  203984 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-546137"
	I1216 02:59:30.285784  203984 host.go:66] Checking if "force-systemd-flag-546137" exists ...
	I1216 02:59:30.286175  203984 cli_runner.go:164] Run: docker container inspect force-systemd-flag-546137 --format={{.State.Status}}
	I1216 02:59:30.286345  203984 cli_runner.go:164] Run: docker container inspect force-systemd-flag-546137 --format={{.State.Status}}
	I1216 02:59:30.288077  203984 out.go:179] * Verifying Kubernetes components...
	I1216 02:59:30.289247  203984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:59:30.310957  203984 kapi.go:59] client config for force-systemd-flag-546137: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/force-systemd-flag-546137/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/force-systemd-flag-546137/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:59:30.311605  203984 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 02:59:30.311623  203984 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 02:59:30.311632  203984 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 02:59:30.311639  203984 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 02:59:30.311645  203984 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 02:59:30.312003  203984 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 02:59:30.312295  203984 addons.go:239] Setting addon default-storageclass=true in "force-systemd-flag-546137"
	I1216 02:59:30.312367  203984 host.go:66] Checking if "force-systemd-flag-546137" exists ...
	I1216 02:59:30.312569  203984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:59:28.827795  209171 addons.go:530] duration metric: took 2.576647ms for enable addons: enabled=[]
	I1216 02:59:28.827810  209171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:59:28.937777  209171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:59:28.955148  209171 node_ready.go:35] waiting up to 6m0s for node "pause-837191" to be "Ready" ...
	I1216 02:59:28.964918  209171 node_ready.go:49] node "pause-837191" is "Ready"
	I1216 02:59:28.964948  209171 node_ready.go:38] duration metric: took 9.75917ms for node "pause-837191" to be "Ready" ...
	I1216 02:59:28.964963  209171 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:59:28.965020  209171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:28.979839  209171 api_server.go:72] duration metric: took 154.644964ms to wait for apiserver process to appear ...
	I1216 02:59:28.979865  209171 api_server.go:88] waiting for apiserver healthz status ...
	I1216 02:59:28.979887  209171 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 02:59:28.985934  209171 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 02:59:28.986973  209171 api_server.go:141] control plane version: v1.34.2
	I1216 02:59:28.987002  209171 api_server.go:131] duration metric: took 7.129968ms to wait for apiserver health ...
	I1216 02:59:28.987014  209171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 02:59:28.990378  209171 system_pods.go:59] 7 kube-system pods found
	I1216 02:59:28.990409  209171 system_pods.go:61] "coredns-66bc5c9577-pjnck" [3ce58deb-ddcb-4423-84e1-fa3a3fd0417c] Running
	I1216 02:59:28.990417  209171 system_pods.go:61] "etcd-pause-837191" [f91a3257-3e5f-4e19-b2e1-e9513801cebf] Running
	I1216 02:59:28.990423  209171 system_pods.go:61] "kindnet-wcl5f" [4e91fbf8-3a12-4de8-a517-2b92db440ff1] Running
	I1216 02:59:28.990429  209171 system_pods.go:61] "kube-apiserver-pause-837191" [b47de40d-4dab-45d7-b2ca-437e326f93d5] Running
	I1216 02:59:28.990434  209171 system_pods.go:61] "kube-controller-manager-pause-837191" [a5d5c3af-d18c-480e-8ccd-eb8636dcf33c] Running
	I1216 02:59:28.990441  209171 system_pods.go:61] "kube-proxy-fmvd7" [4bf1decc-e3b6-4a2c-bbf0-a652a9508a51] Running
	I1216 02:59:28.990446  209171 system_pods.go:61] "kube-scheduler-pause-837191" [0a5b390b-7544-4e50-8dcf-550f14bcfb7c] Running
	I1216 02:59:28.990453  209171 system_pods.go:74] duration metric: took 3.432273ms to wait for pod list to return data ...
	I1216 02:59:28.990464  209171 default_sa.go:34] waiting for default service account to be created ...
	I1216 02:59:28.993915  209171 default_sa.go:45] found service account: "default"
	I1216 02:59:28.993939  209171 default_sa.go:55] duration metric: took 3.468023ms for default service account to be created ...
	I1216 02:59:28.993950  209171 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 02:59:28.998174  209171 system_pods.go:86] 7 kube-system pods found
	I1216 02:59:28.998199  209171 system_pods.go:89] "coredns-66bc5c9577-pjnck" [3ce58deb-ddcb-4423-84e1-fa3a3fd0417c] Running
	I1216 02:59:28.998207  209171 system_pods.go:89] "etcd-pause-837191" [f91a3257-3e5f-4e19-b2e1-e9513801cebf] Running
	I1216 02:59:28.998213  209171 system_pods.go:89] "kindnet-wcl5f" [4e91fbf8-3a12-4de8-a517-2b92db440ff1] Running
	I1216 02:59:28.998218  209171 system_pods.go:89] "kube-apiserver-pause-837191" [b47de40d-4dab-45d7-b2ca-437e326f93d5] Running
	I1216 02:59:28.998224  209171 system_pods.go:89] "kube-controller-manager-pause-837191" [a5d5c3af-d18c-480e-8ccd-eb8636dcf33c] Running
	I1216 02:59:28.998229  209171 system_pods.go:89] "kube-proxy-fmvd7" [4bf1decc-e3b6-4a2c-bbf0-a652a9508a51] Running
	I1216 02:59:28.998234  209171 system_pods.go:89] "kube-scheduler-pause-837191" [0a5b390b-7544-4e50-8dcf-550f14bcfb7c] Running
	I1216 02:59:28.998243  209171 system_pods.go:126] duration metric: took 4.286432ms to wait for k8s-apps to be running ...
	I1216 02:59:28.998252  209171 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 02:59:28.998300  209171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:59:29.015466  209171 system_svc.go:56] duration metric: took 17.206879ms WaitForService to wait for kubelet
	I1216 02:59:29.015496  209171 kubeadm.go:587] duration metric: took 190.307031ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:59:29.015518  209171 node_conditions.go:102] verifying NodePressure condition ...
	I1216 02:59:29.018232  209171 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 02:59:29.018256  209171 node_conditions.go:123] node cpu capacity is 8
	I1216 02:59:29.018269  209171 node_conditions.go:105] duration metric: took 2.745868ms to run NodePressure ...
	I1216 02:59:29.018280  209171 start.go:242] waiting for startup goroutines ...
	I1216 02:59:29.018288  209171 start.go:247] waiting for cluster config update ...
	I1216 02:59:29.018297  209171 start.go:256] writing updated cluster config ...
	I1216 02:59:29.018592  209171 ssh_runner.go:195] Run: rm -f paused
	I1216 02:59:29.022585  209171 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:59:29.023191  209171 kapi.go:59] client config for pause-837191: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/pause-837191/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:59:29.026046  209171 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjnck" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.030784  209171 pod_ready.go:94] pod "coredns-66bc5c9577-pjnck" is "Ready"
	I1216 02:59:29.030813  209171 pod_ready.go:86] duration metric: took 4.743587ms for pod "coredns-66bc5c9577-pjnck" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.033128  209171 pod_ready.go:83] waiting for pod "etcd-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.037042  209171 pod_ready.go:94] pod "etcd-pause-837191" is "Ready"
	I1216 02:59:29.037069  209171 pod_ready.go:86] duration metric: took 3.917829ms for pod "etcd-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.039252  209171 pod_ready.go:83] waiting for pod "kube-apiserver-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.042761  209171 pod_ready.go:94] pod "kube-apiserver-pause-837191" is "Ready"
	I1216 02:59:29.042776  209171 pod_ready.go:86] duration metric: took 3.502579ms for pod "kube-apiserver-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.044556  209171 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.426523  209171 pod_ready.go:94] pod "kube-controller-manager-pause-837191" is "Ready"
	I1216 02:59:29.426548  209171 pod_ready.go:86] duration metric: took 381.970684ms for pod "kube-controller-manager-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:29.629534  209171 pod_ready.go:83] waiting for pod "kube-proxy-fmvd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:30.026327  209171 pod_ready.go:94] pod "kube-proxy-fmvd7" is "Ready"
	I1216 02:59:30.026351  209171 pod_ready.go:86] duration metric: took 396.769838ms for pod "kube-proxy-fmvd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:30.226405  209171 pod_ready.go:83] waiting for pod "kube-scheduler-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:30.626941  209171 pod_ready.go:94] pod "kube-scheduler-pause-837191" is "Ready"
	I1216 02:59:30.626975  209171 pod_ready.go:86] duration metric: took 400.534541ms for pod "kube-scheduler-pause-837191" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:59:30.626991  209171 pod_ready.go:40] duration metric: took 1.604357375s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:59:30.679248  209171 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 02:59:30.680928  209171 out.go:179] * Done! kubectl is now configured to use "pause-837191" cluster and "default" namespace by default
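	For reference, a minimal sketch of reproducing the pod readiness checks above by hand, using the component labels the log itself waits on (context name as configured by this run):
	  kubectl --context pause-837191 -n kube-system get pods -o wide
	  kubectl --context pause-837191 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=60s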
	I1216 02:59:30.313111  203984 cli_runner.go:164] Run: docker container inspect force-systemd-flag-546137 --format={{.State.Status}}
	I1216 02:59:30.314891  203984 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:59:30.315966  203984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:59:30.316033  203984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-546137
	I1216 02:59:30.349543  203984 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:59:30.349572  203984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:59:30.349635  203984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-546137
	I1216 02:59:30.350322  203984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/force-systemd-flag-546137/id_rsa Username:docker}
	I1216 02:59:30.372438  203984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/force-systemd-flag-546137/id_rsa Username:docker}
	I1216 02:59:30.388083  203984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 02:59:30.447081  203984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:59:30.461768  203984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:59:30.484722  203984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:59:30.557141  203984 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1216 02:59:30.558173  203984 kapi.go:59] client config for force-systemd-flag-546137: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/force-systemd-flag-546137/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/force-systemd-flag-546137/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:59:30.558201  203984 kapi.go:59] client config for force-systemd-flag-546137: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/force-systemd-flag-546137/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/force-systemd-flag-546137/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:59:30.558568  203984 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:59:30.558623  203984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:30.767800  203984 api_server.go:72] duration metric: took 482.18892ms to wait for apiserver process to appear ...
	I1216 02:59:30.767857  203984 api_server.go:88] waiting for apiserver healthz status ...
	I1216 02:59:30.767881  203984 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 02:59:30.773343  203984 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 02:59:30.774281  203984 api_server.go:141] control plane version: v1.34.2
	I1216 02:59:30.774304  203984 api_server.go:131] duration metric: took 6.439575ms to wait for apiserver health ...
	I1216 02:59:30.774313  203984 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 02:59:30.777208  203984 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 02:59:30.777547  203984 system_pods.go:59] 5 kube-system pods found
	I1216 02:59:30.777575  203984 system_pods.go:61] "etcd-force-systemd-flag-546137" [03641f0e-54b0-46ab-9396-e2e0a6ae28ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 02:59:30.777583  203984 system_pods.go:61] "kube-apiserver-force-systemd-flag-546137" [ba3afe03-f9b5-4d07-bac8-102667b82482] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 02:59:30.777592  203984 system_pods.go:61] "kube-controller-manager-force-systemd-flag-546137" [f85080af-5293-43e3-a36b-b621a2132e73] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 02:59:30.777598  203984 system_pods.go:61] "kube-scheduler-force-systemd-flag-546137" [1acd6600-be0a-4a62-bbad-dd13372a597d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 02:59:30.777602  203984 system_pods.go:61] "storage-provisioner" [279c88c9-5373-47c2-8787-991cd6f32df0] Pending
	I1216 02:59:30.777607  203984 system_pods.go:74] duration metric: took 3.290252ms to wait for pod list to return data ...
	I1216 02:59:30.777616  203984 kubeadm.go:587] duration metric: took 492.01079ms to wait for: map[apiserver:true system_pods:true]
	I1216 02:59:30.777631  203984 node_conditions.go:102] verifying NodePressure condition ...
	I1216 02:59:30.778327  203984 addons.go:530] duration metric: took 492.683197ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 02:59:30.780208  203984 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 02:59:30.780234  203984 node_conditions.go:123] node cpu capacity is 8
	I1216 02:59:30.780249  203984 node_conditions.go:105] duration metric: took 2.613112ms to run NodePressure ...
	I1216 02:59:30.780262  203984 start.go:242] waiting for startup goroutines ...
	I1216 02:59:31.061797  203984 kapi.go:214] "coredns" deployment in "kube-system" namespace and "force-systemd-flag-546137" context rescaled to 1 replicas
	I1216 02:59:31.061935  203984 start.go:247] waiting for cluster config update ...
	I1216 02:59:31.061958  203984 start.go:256] writing updated cluster config ...
	I1216 02:59:31.062295  203984 ssh_runner.go:195] Run: rm -f paused
	I1216 02:59:31.117041  203984 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 02:59:31.118966  203984 out.go:179] * Done! kubectl is now configured to use "force-systemd-flag-546137" cluster and "default" namespace by default
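	For reference, the apiserver healthz probe logged above can be repeated by hand; a minimal sketch (context name taken from this run):
	  kubectl --context force-systemd-flag-546137 get --raw=/healthz
	  kubectl --context force-systemd-flag-546137 -n kube-system get pods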
	I1216 02:59:30.094065  208317 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:59:30.097803  208317 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:59:30.097834  208317 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:59:30.097844  208317 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 02:59:30.097921  208317 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 02:59:30.098012  208317 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 02:59:30.098115  208317 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 02:59:30.106111  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 02:59:30.128059  208317 start.go:296] duration metric: took 155.563897ms for postStartSetup
	I1216 02:59:30.128399  208317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-332150
	I1216 02:59:30.148631  208317 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/config.json ...
	I1216 02:59:30.148898  208317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:59:30.148935  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:30.168487  208317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/cert-expiration-332150/id_rsa Username:docker}
	I1216 02:59:30.268840  208317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:59:30.274930  208317 start.go:128] duration metric: took 10.030646181s to createHost
	I1216 02:59:30.274946  208317 start.go:83] releasing machines lock for "cert-expiration-332150", held for 10.030781096s
	I1216 02:59:30.275022  208317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-332150
	I1216 02:59:30.298410  208317 ssh_runner.go:195] Run: cat /version.json
	I1216 02:59:30.298469  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:30.298501  208317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:59:30.298565  208317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-332150
	I1216 02:59:30.323978  208317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/cert-expiration-332150/id_rsa Username:docker}
	I1216 02:59:30.324230  208317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/cert-expiration-332150/id_rsa Username:docker}
	I1216 02:59:30.503988  208317 ssh_runner.go:195] Run: systemctl --version
	I1216 02:59:30.511737  208317 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 02:59:30.555132  208317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:59:30.561584  208317 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:59:30.561634  208317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:59:30.595001  208317 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 02:59:30.595017  208317 start.go:496] detecting cgroup driver to use...
	I1216 02:59:30.595050  208317 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 02:59:30.595101  208317 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 02:59:30.612811  208317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 02:59:30.626183  208317 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:59:30.626232  208317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:59:30.644254  208317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:59:30.664771  208317 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:59:30.777431  208317 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:59:30.888190  208317 docker.go:234] disabling docker service ...
	I1216 02:59:30.888251  208317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:59:30.908002  208317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:59:30.922129  208317 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:59:31.006709  208317 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:59:31.102073  208317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:59:31.116225  208317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:59:31.131067  208317 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 02:59:31.131120  208317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:31.141979  208317 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 02:59:31.142112  208317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:31.156017  208317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:31.166736  208317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:31.178143  208317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:59:31.187770  208317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:31.198018  208317 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:31.216448  208317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:59:31.227295  208317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:59:31.237291  208317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:59:31.248028  208317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:59:31.342616  208317 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 02:59:31.480648  208317 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 02:59:31.480721  208317 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 02:59:31.484867  208317 start.go:564] Will wait 60s for crictl version
	I1216 02:59:31.484982  208317 ssh_runner.go:195] Run: which crictl
	I1216 02:59:31.490562  208317 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:59:31.526278  208317 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 02:59:31.526341  208317 ssh_runner.go:195] Run: crio --version
	I1216 02:59:31.558964  208317 ssh_runner.go:195] Run: crio --version
	I1216 02:59:31.595001  208317 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
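	For reference, a minimal sketch of confirming the CRI-O settings written by the sed commands above (profile name and config path taken from this run):
	  minikube -p cert-expiration-332150 ssh -- sudo grep -E 'cgroup_manager|pause_image' /etc/crio/crio.conf.d/02-crio.conf
	  minikube -p cert-expiration-332150 ssh -- sudo crictl version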
	I1216 02:59:27.689701  204814 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 02:59:27.689741  204814 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
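	For reference, the "==> CRI-O <==" journal section below can also be pulled straight from the node; a minimal sketch (profile name taken from the entries that follow):
	  minikube -p pause-837191 ssh -- sudo journalctl -u crio --no-pager -n 50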
	
	
	==> CRI-O <==
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.552446296Z" level=info msg="RDT not available in the host system"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.552457699Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.553317303Z" level=info msg="Conmon does support the --sync option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.553401894Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.553434354Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.554181162Z" level=info msg="Conmon does support the --sync option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.554197931Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.55812229Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.5581581Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.55868454Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hook
s.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_m
appings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"
/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri
]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.559052674Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.55911071Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.637781517Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-pjnck Namespace:kube-system ID:7ca09997d61fb843ffb635a880e2b79da5d3971af9ca71f3a3329e3d3657cc1a UID:3ce58deb-ddcb-4423-84e1-fa3a3fd0417c NetNS:/var/run/netns/64054fce-d6a7-4c40-abf6-fa0cbe5b0333 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006a0300}] Aliases:map[]}"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638013832Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-pjnck for CNI network kindnet (type=ptp)"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638482181Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638505956Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638557186Z" level=info msg="Create NRI interface"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638675408Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638691461Z" level=info msg="runtime interface created"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638701258Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638706977Z" level=info msg="runtime interface starting up..."
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638712628Z" level=info msg="starting plugins..."
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638724811Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.639030881Z" level=info msg="No systemd watchdog enabled"
	Dec 16 02:59:27 pause-837191 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	bf2d431585316       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   7ca09997d61fb       coredns-66bc5c9577-pjnck               kube-system
	b844e62002d16       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   ad13d00f0e700       kindnet-wcl5f                          kube-system
	88215e5886950       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   25 seconds ago      Running             kube-proxy                0                   b2da3e312f4be       kube-proxy-fmvd7                       kube-system
	5b9d7480573c6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   35 seconds ago      Running             etcd                      0                   3284bd3a120f2       etcd-pause-837191                      kube-system
	1d892921aadff       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   35 seconds ago      Running             kube-scheduler            0                   6cef185e99a30       kube-scheduler-pause-837191            kube-system
	b3a30f62410fc       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   35 seconds ago      Running             kube-controller-manager   0                   8fa19cc77a23a       kube-controller-manager-pause-837191   kube-system
	8611286a63a13       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   35 seconds ago      Running             kube-apiserver            0                   d6b64b96ce57f       kube-apiserver-pause-837191            kube-system
	
	
	==> coredns [bf2d43158531635936153e35498027f85dac2c9d92d2a0fc1c48c72773fdfc76] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40251 - 9667 "HINFO IN 8510780704647402415.8346462452358273346. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019912735s
	
	
	==> describe nodes <==
	Name:               pause-837191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-837191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=pause-837191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T02_59_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 02:59:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-837191
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 02:59:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:58:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:58:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:58:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-837191
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                2e52ea69-2427-4b82-be68-cbc8774b0719
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pjnck                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-837191                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-wcl5f                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-837191             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-837191    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-fmvd7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-837191             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-837191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-837191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-837191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-837191 event: Registered Node pause-837191 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-837191 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [5b9d7480573c6636e2e3a391994a40dfc711d0c4e6fcbeb6672b80027565f2a1] <==
	{"level":"warn","ts":"2025-12-16T02:59:00.007364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.020078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.032810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.046032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.066521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.078720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.090479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.102151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.114541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.126475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.138321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.147007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.157639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.167846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.180997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.192552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.208013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.212611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.221691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.230254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.291042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42064","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T02:59:24.246079Z","caller":"traceutil/trace.go:172","msg":"trace[195916779] linearizableReadLoop","detail":"{readStateIndex:416; appliedIndex:416; }","duration":"131.782438ms","start":"2025-12-16T02:59:24.114274Z","end":"2025-12-16T02:59:24.246057Z","steps":["trace[195916779] 'read index received'  (duration: 131.772263ms)","trace[195916779] 'applied index is now lower than readState.Index'  (duration: 8.47µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T02:59:24.246214Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.911256ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:59:24.246262Z","caller":"traceutil/trace.go:172","msg":"trace[596979243] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"231.146257ms","start":"2025-12-16T02:59:24.015100Z","end":"2025-12-16T02:59:24.246246Z","steps":["trace[596979243] 'process raft request'  (duration: 231.039068ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:59:24.246287Z","caller":"traceutil/trace.go:172","msg":"trace[1115945735] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:402; }","duration":"132.008762ms","start":"2025-12-16T02:59:24.114269Z","end":"2025-12-16T02:59:24.246278Z","steps":["trace[1115945735] 'agreement among raft nodes before linearized reading'  (duration: 131.866429ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:59:34 up 42 min,  0 user,  load average: 4.83, 2.07, 1.34
	Linux pause-837191 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b844e62002d1615c3f7e7a89b2acf6de0af8614c0c7cca7e2885af4a6ba3a0d2] <==
	I1216 02:59:09.504441       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 02:59:09.504695       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 02:59:09.504884       1 main.go:148] setting mtu 1500 for CNI 
	I1216 02:59:09.504902       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 02:59:09.504939       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T02:59:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 02:59:09.707874       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 02:59:09.707897       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 02:59:09.707920       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 02:59:09.708087       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 02:59:10.102246       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 02:59:10.102279       1 metrics.go:72] Registering metrics
	I1216 02:59:10.102746       1 controller.go:711] "Syncing nftables rules"
	I1216 02:59:19.707925       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 02:59:19.707986       1 main.go:301] handling current node
	I1216 02:59:29.711513       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 02:59:29.711571       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8611286a63a130f61845da98343f94d94d7a2f6bb2d895ef06c389f57d9c11aa] <==
	I1216 02:59:01.025967       1 policy_source.go:240] refreshing policies
	E1216 02:59:01.028323       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1216 02:59:01.079568       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 02:59:01.087871       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:01.088502       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1216 02:59:01.096446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:01.098454       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 02:59:01.227509       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 02:59:01.874668       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 02:59:01.878849       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 02:59:01.879237       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 02:59:02.379636       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 02:59:02.413683       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 02:59:02.489873       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 02:59:02.497163       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1216 02:59:02.498430       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 02:59:02.502792       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 02:59:02.904669       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 02:59:03.359782       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 02:59:03.388506       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 02:59:03.418935       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 02:59:08.355232       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 02:59:08.806600       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:08.810353       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:08.854323       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b3a30f62410fca50e714e57aaf10e73ede2ea14f906c88fdfb4b48e64594cad5] <==
	I1216 02:59:07.860107       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-837191" podCIDRs=["10.244.0.0/24"]
	I1216 02:59:07.875830       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 02:59:07.877258       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 02:59:07.902013       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 02:59:07.902034       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 02:59:07.902054       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 02:59:07.902067       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1216 02:59:07.903220       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 02:59:07.903249       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 02:59:07.903270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 02:59:07.903275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 02:59:07.903306       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 02:59:07.903341       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 02:59:07.903371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 02:59:07.903382       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 02:59:07.903375       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 02:59:07.903462       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 02:59:07.903543       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 02:59:07.907153       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:59:07.907238       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 02:59:07.908335       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 02:59:07.911851       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:59:07.918196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 02:59:07.929455       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 02:59:22.855551       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [88215e5886950e140a7ff17f1db589734c05090b77ed751818f3f1d5a4c3bd38] <==
	I1216 02:59:09.271923       1 server_linux.go:53] "Using iptables proxy"
	I1216 02:59:09.328449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 02:59:09.429261       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 02:59:09.429299       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 02:59:09.429383       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 02:59:09.448736       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 02:59:09.448798       1 server_linux.go:132] "Using iptables Proxier"
	I1216 02:59:09.453995       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 02:59:09.454385       1 server.go:527] "Version info" version="v1.34.2"
	I1216 02:59:09.454449       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 02:59:09.458660       1 config.go:106] "Starting endpoint slice config controller"
	I1216 02:59:09.459240       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 02:59:09.458678       1 config.go:200] "Starting service config controller"
	I1216 02:59:09.459268       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 02:59:09.458841       1 config.go:309] "Starting node config controller"
	I1216 02:59:09.459094       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 02:59:09.459279       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 02:59:09.459344       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 02:59:09.459405       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 02:59:09.559637       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 02:59:09.560884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 02:59:09.560909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d892921aadff5d7982b5f0a3c22519e237473e3650490fb4b559b2217603788] <==
	E1216 02:59:01.005640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 02:59:01.005795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 02:59:01.007411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 02:59:01.006503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:59:01.006576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:59:01.006654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 02:59:01.006685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 02:59:01.006725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:59:01.006791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 02:59:01.007492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 02:59:01.007569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 02:59:01.006006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 02:59:01.007842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 02:59:01.007852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 02:59:01.817418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 02:59:01.822324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:59:01.881238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 02:59:02.025139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:59:02.035202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 02:59:02.044799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 02:59:02.088291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 02:59:02.105981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:59:02.180064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 02:59:02.210915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1216 02:59:03.598091       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 02:59:04 pause-837191 kubelet[1320]: E1216 02:59:04.403207    1320 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-837191\" already exists" pod="kube-system/kube-scheduler-pause-837191"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.429779    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-837191" podStartSLOduration=1.429757964 podStartE2EDuration="1.429757964s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.427395626 +0000 UTC m=+1.260841767" watchObservedRunningTime="2025-12-16 02:59:04.429757964 +0000 UTC m=+1.263204104"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.453499    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-837191" podStartSLOduration=1.453475562 podStartE2EDuration="1.453475562s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.443064462 +0000 UTC m=+1.276510584" watchObservedRunningTime="2025-12-16 02:59:04.453475562 +0000 UTC m=+1.286921698"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.471978    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-837191" podStartSLOduration=1.471919415 podStartE2EDuration="1.471919415s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.455784419 +0000 UTC m=+1.289230560" watchObservedRunningTime="2025-12-16 02:59:04.471919415 +0000 UTC m=+1.305365554"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.472159    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-837191" podStartSLOduration=1.472150327 podStartE2EDuration="1.472150327s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.471449964 +0000 UTC m=+1.304896105" watchObservedRunningTime="2025-12-16 02:59:04.472150327 +0000 UTC m=+1.305596468"
	Dec 16 02:59:07 pause-837191 kubelet[1320]: I1216 02:59:07.921326    1320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 02:59:07 pause-837191 kubelet[1320]: I1216 02:59:07.922165    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959584    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28c9s\" (UniqueName: \"kubernetes.io/projected/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-kube-api-access-28c9s\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959674    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e91fbf8-3a12-4de8-a517-2b92db440ff1-lib-modules\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959707    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-lib-modules\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959729    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4e91fbf8-3a12-4de8-a517-2b92db440ff1-cni-cfg\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959752    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-kube-proxy\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959778    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-xtables-lock\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959800    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e91fbf8-3a12-4de8-a517-2b92db440ff1-xtables-lock\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959834    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rrk8\" (UniqueName: \"kubernetes.io/projected/4e91fbf8-3a12-4de8-a517-2b92db440ff1-kube-api-access-2rrk8\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:09 pause-837191 kubelet[1320]: I1216 02:59:09.430850    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fmvd7" podStartSLOduration=1.430813608 podStartE2EDuration="1.430813608s" podCreationTimestamp="2025-12-16 02:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:09.420694121 +0000 UTC m=+6.254140264" watchObservedRunningTime="2025-12-16 02:59:09.430813608 +0000 UTC m=+6.264259750"
	Dec 16 02:59:09 pause-837191 kubelet[1320]: I1216 02:59:09.430968    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wcl5f" podStartSLOduration=1.430960378 podStartE2EDuration="1.430960378s" podCreationTimestamp="2025-12-16 02:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:09.430625972 +0000 UTC m=+6.264072114" watchObservedRunningTime="2025-12-16 02:59:09.430960378 +0000 UTC m=+6.264406521"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.009580    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.140247    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ce58deb-ddcb-4423-84e1-fa3a3fd0417c-config-volume\") pod \"coredns-66bc5c9577-pjnck\" (UID: \"3ce58deb-ddcb-4423-84e1-fa3a3fd0417c\") " pod="kube-system/coredns-66bc5c9577-pjnck"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.140292    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7nv4\" (UniqueName: \"kubernetes.io/projected/3ce58deb-ddcb-4423-84e1-fa3a3fd0417c-kube-api-access-p7nv4\") pod \"coredns-66bc5c9577-pjnck\" (UID: \"3ce58deb-ddcb-4423-84e1-fa3a3fd0417c\") " pod="kube-system/coredns-66bc5c9577-pjnck"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.454130    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pjnck" podStartSLOduration=11.454106785 podStartE2EDuration="11.454106785s" podCreationTimestamp="2025-12-16 02:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:20.454074445 +0000 UTC m=+17.287520585" watchObservedRunningTime="2025-12-16 02:59:20.454106785 +0000 UTC m=+17.287552926"
	Dec 16 02:59:31 pause-837191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 02:59:31 pause-837191 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 02:59:31 pause-837191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:59:31 pause-837191 systemd[1]: kubelet.service: Consumed 1.202s CPU time.
	

                                                
                                                
-- /stdout --
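
The kubelet journal at the end of the dump above shows systemd stopping kubelet.service at 02:59:31, right before the harness re-checks component status below. The following is a minimal sketch, not part of the harness, of how one could ask the node directly whether kubelet is still active after the pause attempt; it is written in Go like the test code, shells out to the minikube binary path and profile name that appear in this report, and assumes nothing else beyond `minikube ssh` and `systemctl is-active` behaving as usual.

	// kubelet_state_check.go - illustrative sketch only; mirrors the post-mortem
	// above by asking the pause-837191 node, over `minikube ssh`, whether kubelet
	// is still active. Binary path and profile name are taken from this report.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "pause-837191",
			"ssh", "--", "sudo", "systemctl", "is-active", "kubelet")
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		// systemctl is-active exits non-zero for anything other than "active",
		// so a non-nil err here usually just means kubelet is stopped.
		fmt.Printf("kubelet state: %q (err: %v)\n", state, err)
	}
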
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-837191 -n pause-837191
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-837191 -n pause-837191: exit status 2 (359.665529ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-837191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-837191
helpers_test.go:244: (dbg) docker inspect pause-837191:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71",
	        "Created": "2025-12-16T02:58:42.695717357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:58:43.165384346Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/hostname",
	        "HostsPath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/hosts",
	        "LogPath": "/var/lib/docker/containers/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71/64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71-json.log",
	        "Name": "/pause-837191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-837191:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-837191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64b06a5a566570891349e910d11697e582cb7a7c4df4b70ef31d0e7b52ebbd71",
	                "LowerDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1531633d300fa4c6dc09f2dc61f12884fbd8e6802f076bd3eff12eed5099e05e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-837191",
	                "Source": "/var/lib/docker/volumes/pause-837191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-837191",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-837191",
	                "name.minikube.sigs.k8s.io": "pause-837191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3bf8b98c76f0422d2025ddd996cd4d08a9fa597def55f7aff705bfe1caae86c1",
	            "SandboxKey": "/var/run/docker/netns/3bf8b98c76f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-837191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8a06604d014961f2f4dab4932da9bc10e4eabf846faad30337573f8dda24095",
	                    "EndpointID": "ab272c4dc124edf219acb74b2f8ebbc028ab8ce5ed98ddd1f9ee93c1b919781a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "36:38:b1:41:8b:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-837191",
	                        "64b06a5a5665"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-837191 -n pause-837191
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-837191 -n pause-837191: exit status 2 (337.480581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-837191 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │ 16 Dec 25 02:57 UTC │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │                     │
	│ stop    │ -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:57 UTC │ 16 Dec 25 02:57 UTC │
	│ delete  │ -p scheduled-stop-708409                                                                                                                                                                                                  │ scheduled-stop-708409       │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:58 UTC │
	│ start   │ -p insufficient-storage-058217 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-058217 │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │                     │
	│ delete  │ -p insufficient-storage-058217                                                                                                                                                                                            │ insufficient-storage-058217 │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:58 UTC │
	│ start   │ -p pause-837191 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-837191                │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p force-systemd-env-849216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-849216    │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p offline-crio-827391 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-827391         │ jenkins │ v1.37.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p stopped-upgrade-863865 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-863865      │ jenkins │ v1.35.0 │ 16 Dec 25 02:58 UTC │ 16 Dec 25 02:59 UTC │
	│ delete  │ -p force-systemd-env-849216                                                                                                                                                                                               │ force-systemd-env-849216    │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ stop    │ stopped-upgrade-863865 stop                                                                                                                                                                                               │ stopped-upgrade-863865      │ jenkins │ v1.35.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p force-systemd-flag-546137 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-546137   │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p stopped-upgrade-863865 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-863865      │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	│ delete  │ -p offline-crio-827391                                                                                                                                                                                                    │ offline-crio-827391         │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p cert-expiration-332150 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-332150      │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	│ start   │ -p pause-837191 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-837191                │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ pause   │ -p pause-837191 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-837191                │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	│ ssh     │ force-systemd-flag-546137 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-546137   │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ delete  │ -p force-systemd-flag-546137                                                                                                                                                                                              │ force-systemd-flag-546137   │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │ 16 Dec 25 02:59 UTC │
	│ start   │ -p cert-options-436902 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-436902         │ jenkins │ v1.37.0 │ 16 Dec 25 02:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:59:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:59:33.997758  213079 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:59:33.998022  213079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:59:33.998026  213079 out.go:374] Setting ErrFile to fd 2...
	I1216 02:59:33.998029  213079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:59:33.998213  213079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:59:33.998681  213079 out.go:368] Setting JSON to false
	I1216 02:59:33.999748  213079 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2526,"bootTime":1765851448,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:59:33.999791  213079 start.go:143] virtualization: kvm guest
	I1216 02:59:34.001808  213079 out.go:179] * [cert-options-436902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:59:34.003080  213079 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:59:34.003111  213079 notify.go:221] Checking for updates...
	I1216 02:59:34.005567  213079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:59:34.007031  213079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:59:34.008264  213079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:59:34.009407  213079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:59:34.010634  213079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:59:34.012426  213079 config.go:182] Loaded profile config "cert-expiration-332150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:34.012627  213079 config.go:182] Loaded profile config "pause-837191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:59:34.012734  213079 config.go:182] Loaded profile config "stopped-upgrade-863865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 02:59:34.012846  213079 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:59:34.038955  213079 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:59:34.039084  213079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:59:34.104861  213079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 02:59:34.093913758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:59:34.104975  213079 docker.go:319] overlay module found
	I1216 02:59:34.106635  213079 out.go:179] * Using the docker driver based on user configuration
	I1216 02:59:34.107923  213079 start.go:309] selected driver: docker
	I1216 02:59:34.107934  213079 start.go:927] validating driver "docker" against <nil>
	I1216 02:59:34.107948  213079 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:59:34.108553  213079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:59:34.170575  213079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 02:59:34.160412495 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:59:34.170729  213079 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:59:34.170955  213079 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:59:34.172525  213079 out.go:179] * Using Docker driver with root privileges
	I1216 02:59:34.173699  213079 cni.go:84] Creating CNI manager for ""
	I1216 02:59:34.173754  213079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:59:34.173759  213079 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 02:59:34.173836  213079 start.go:353] cluster config:
	{Name:cert-options-436902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-436902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.
0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInter
val:1m0s}
	I1216 02:59:34.175306  213079 out.go:179] * Starting "cert-options-436902" primary control-plane node in "cert-options-436902" cluster
	I1216 02:59:34.176500  213079 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 02:59:34.177665  213079 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:59:34.178833  213079 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:59:34.178859  213079 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 02:59:34.178873  213079 cache.go:65] Caching tarball of preloaded images
	I1216 02:59:34.178875  213079 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:59:34.178973  213079 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 02:59:34.178982  213079 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 02:59:34.179085  213079 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-options-436902/config.json ...
	I1216 02:59:34.179098  213079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-options-436902/config.json: {Name:mkf3bb0c2bcd1d6926a24e8e7e7e762fc124f7e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:34.201330  213079 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:59:34.201340  213079 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:59:34.201355  213079 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:59:34.201381  213079 start.go:360] acquireMachinesLock for cert-options-436902: {Name:mk7b601b016ecc3f0ea574ddbe535db86635a347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:59:34.201466  213079 start.go:364] duration metric: took 73.527µs to acquireMachinesLock for "cert-options-436902"
	I1216 02:59:34.201492  213079 start.go:93] Provisioning new machine with config: &{Name:cert-options-436902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-436902 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:59:34.201549  213079 start.go:125] createHost starting for "" (driver="docker")
	I1216 02:59:31.596163  208317 cli_runner.go:164] Run: docker network inspect cert-expiration-332150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:59:31.616370  208317 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 02:59:31.620890  208317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:59:31.631259  208317 kubeadm.go:884] updating cluster {Name:cert-expiration-332150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-332150 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:59:31.631424  208317 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:59:31.631485  208317 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:59:31.665103  208317 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:59:31.665115  208317 crio.go:433] Images already preloaded, skipping extraction
	I1216 02:59:31.665180  208317 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:59:31.695092  208317 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:59:31.695105  208317 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:59:31.695113  208317 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1216 02:59:31.695226  208317 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-332150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-332150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:59:31.695298  208317 ssh_runner.go:195] Run: crio config
	I1216 02:59:31.744519  208317 cni.go:84] Creating CNI manager for ""
	I1216 02:59:31.744537  208317 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 02:59:31.744556  208317 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:59:31.744583  208317 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-332150 NodeName:cert-expiration-332150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:59:31.744751  208317 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-332150"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 02:59:31.744852  208317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 02:59:31.754808  208317 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:59:31.754909  208317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:59:31.764233  208317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1216 02:59:31.778291  208317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 02:59:31.795479  208317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 02:59:31.809210  208317 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:59:31.813483  208317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:59:31.824310  208317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:59:31.907153  208317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:59:31.930376  208317 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150 for IP: 192.168.94.2
	I1216 02:59:31.930388  208317 certs.go:195] generating shared ca certs ...
	I1216 02:59:31.930405  208317 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:31.930561  208317 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 02:59:31.930609  208317 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 02:59:31.930616  208317 certs.go:257] generating profile certs ...
	I1216 02:59:31.930683  208317 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/client.key
	I1216 02:59:31.930702  208317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/client.crt with IP's: []
	I1216 02:59:31.967049  208317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/client.crt ...
	I1216 02:59:31.967073  208317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/client.crt: {Name:mk0a350071d485b89a629cfce2bb39c66ab7d437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:31.967320  208317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/client.key ...
	I1216 02:59:31.967336  208317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/client.key: {Name:mkeb06f869fcc3c28d22bda51365f595b995e271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:31.967449  208317 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.key.bb27f40d
	I1216 02:59:31.967467  208317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.crt.bb27f40d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1216 02:59:32.028017  208317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.crt.bb27f40d ...
	I1216 02:59:32.028031  208317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.crt.bb27f40d: {Name:mka9f4c14c71ca04cee52c66ae8c4894ed1457d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:32.028207  208317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.key.bb27f40d ...
	I1216 02:59:32.028214  208317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.key.bb27f40d: {Name:mk4548d9bed769a832a41895beffb13eab100c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:32.028285  208317 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.crt.bb27f40d -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.crt
	I1216 02:59:32.028393  208317 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.key.bb27f40d -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.key
	I1216 02:59:32.028462  208317 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.key
	I1216 02:59:32.028473  208317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.crt with IP's: []
	I1216 02:59:32.132565  208317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.crt ...
	I1216 02:59:32.132586  208317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.crt: {Name:mk918e28ffa3a5ac30cc9d3eaf329d3c9fbc6e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:32.132811  208317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.key ...
	I1216 02:59:32.132835  208317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.key: {Name:mkac3fd14f8d44e33e187e6f2b742fbf9a780e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:59:32.133041  208317 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 02:59:32.133076  208317 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 02:59:32.133081  208317 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 02:59:32.133102  208317 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 02:59:32.133122  208317 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:59:32.133150  208317 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 02:59:32.133190  208317 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 02:59:32.133710  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:59:32.151634  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 02:59:32.168969  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:59:32.187204  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:59:32.207036  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 02:59:32.226251  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:59:32.243673  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:59:32.260769  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/cert-expiration-332150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 02:59:32.278121  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 02:59:32.297097  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:59:32.315449  208317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 02:59:32.334299  208317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:59:32.347211  208317 ssh_runner.go:195] Run: openssl version
	I1216 02:59:32.354865  208317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 02:59:32.362205  208317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 02:59:32.369709  208317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 02:59:32.373731  208317 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 02:59:32.373764  208317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 02:59:32.408319  208317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:59:32.417519  208317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 02:59:32.424847  208317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:32.431899  208317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:59:32.438900  208317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:32.442411  208317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:32.442452  208317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:59:32.477646  208317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:59:32.485089  208317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 02:59:32.493195  208317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 02:59:32.500269  208317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 02:59:32.507617  208317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 02:59:32.511300  208317 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 02:59:32.511349  208317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 02:59:32.546891  208317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:59:32.554691  208317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 02:59:32.562418  208317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:59:32.566099  208317 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 02:59:32.566139  208317 kubeadm.go:401] StartCluster: {Name:cert-expiration-332150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-332150 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:59:32.566196  208317 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:59:32.566245  208317 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:59:32.592625  208317 cri.go:89] found id: ""
	I1216 02:59:32.592678  208317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:59:32.601008  208317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:59:32.608490  208317 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 02:59:32.608537  208317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:59:32.616129  208317 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 02:59:32.616141  208317 kubeadm.go:158] found existing configuration files:
	
	I1216 02:59:32.616185  208317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 02:59:32.623556  208317 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 02:59:32.623595  208317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 02:59:32.630859  208317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 02:59:32.638376  208317 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 02:59:32.638411  208317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:59:32.646257  208317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 02:59:32.653537  208317 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 02:59:32.653569  208317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:59:32.660515  208317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 02:59:32.667645  208317 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 02:59:32.667675  208317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:59:32.674925  208317 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 02:59:32.747063  208317 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 02:59:32.808241  208317 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.552446296Z" level=info msg="RDT not available in the host system"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.552457699Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.553317303Z" level=info msg="Conmon does support the --sync option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.553401894Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.553434354Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.554181162Z" level=info msg="Conmon does support the --sync option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.554197931Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.55812229Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.5581581Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.55868454Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hook
s.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_m
appings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"
/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri
]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.559052674Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.55911071Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.637781517Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-pjnck Namespace:kube-system ID:7ca09997d61fb843ffb635a880e2b79da5d3971af9ca71f3a3329e3d3657cc1a UID:3ce58deb-ddcb-4423-84e1-fa3a3fd0417c NetNS:/var/run/netns/64054fce-d6a7-4c40-abf6-fa0cbe5b0333 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006a0300}] Aliases:map[]}"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638013832Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-pjnck for CNI network kindnet (type=ptp)"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638482181Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638505956Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638557186Z" level=info msg="Create NRI interface"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638675408Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638691461Z" level=info msg="runtime interface created"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638701258Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638706977Z" level=info msg="runtime interface starting up..."
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638712628Z" level=info msg="starting plugins..."
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.638724811Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:59:27 pause-837191 crio[2169]: time="2025-12-16T02:59:27.639030881Z" level=info msg="No systemd watchdog enabled"
	Dec 16 02:59:27 pause-837191 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	bf2d431585316       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   7ca09997d61fb       coredns-66bc5c9577-pjnck               kube-system
	b844e62002d16       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   ad13d00f0e700       kindnet-wcl5f                          kube-system
	88215e5886950       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   26 seconds ago      Running             kube-proxy                0                   b2da3e312f4be       kube-proxy-fmvd7                       kube-system
	5b9d7480573c6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   37 seconds ago      Running             etcd                      0                   3284bd3a120f2       etcd-pause-837191                      kube-system
	1d892921aadff       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   37 seconds ago      Running             kube-scheduler            0                   6cef185e99a30       kube-scheduler-pause-837191            kube-system
	b3a30f62410fc       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   37 seconds ago      Running             kube-controller-manager   0                   8fa19cc77a23a       kube-controller-manager-pause-837191   kube-system
	8611286a63a13       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   37 seconds ago      Running             kube-apiserver            0                   d6b64b96ce57f       kube-apiserver-pause-837191            kube-system
	
	
	==> coredns [bf2d43158531635936153e35498027f85dac2c9d92d2a0fc1c48c72773fdfc76] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40251 - 9667 "HINFO IN 8510780704647402415.8346462452358273346. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019912735s
	
	
	==> describe nodes <==
	Name:               pause-837191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-837191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=pause-837191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T02_59_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 02:59:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-837191
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 02:59:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:58:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:58:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:58:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 02:59:24 +0000   Tue, 16 Dec 2025 02:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-837191
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                2e52ea69-2427-4b82-be68-cbc8774b0719
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pjnck                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-837191                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-wcl5f                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-837191             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-837191    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-fmvd7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-837191             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node pause-837191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node pause-837191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node pause-837191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-837191 event: Registered Node pause-837191 in Controller
	  Normal  NodeReady                16s   kubelet          Node pause-837191 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [5b9d7480573c6636e2e3a391994a40dfc711d0c4e6fcbeb6672b80027565f2a1] <==
	{"level":"warn","ts":"2025-12-16T02:59:00.007364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.020078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.032810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.046032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.066521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.078720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.090479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.102151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.114541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.126475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.138321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.147007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.157639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.167846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.180997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.192552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.208013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.212611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.221691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.230254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T02:59:00.291042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42064","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T02:59:24.246079Z","caller":"traceutil/trace.go:172","msg":"trace[195916779] linearizableReadLoop","detail":"{readStateIndex:416; appliedIndex:416; }","duration":"131.782438ms","start":"2025-12-16T02:59:24.114274Z","end":"2025-12-16T02:59:24.246057Z","steps":["trace[195916779] 'read index received'  (duration: 131.772263ms)","trace[195916779] 'applied index is now lower than readState.Index'  (duration: 8.47µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T02:59:24.246214Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.911256ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:59:24.246262Z","caller":"traceutil/trace.go:172","msg":"trace[596979243] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"231.146257ms","start":"2025-12-16T02:59:24.015100Z","end":"2025-12-16T02:59:24.246246Z","steps":["trace[596979243] 'process raft request'  (duration: 231.039068ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:59:24.246287Z","caller":"traceutil/trace.go:172","msg":"trace[1115945735] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:402; }","duration":"132.008762ms","start":"2025-12-16T02:59:24.114269Z","end":"2025-12-16T02:59:24.246278Z","steps":["trace[1115945735] 'agreement among raft nodes before linearized reading'  (duration: 131.866429ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:59:36 up 42 min,  0 user,  load average: 4.83, 2.07, 1.34
	Linux pause-837191 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b844e62002d1615c3f7e7a89b2acf6de0af8614c0c7cca7e2885af4a6ba3a0d2] <==
	I1216 02:59:09.504441       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 02:59:09.504695       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 02:59:09.504884       1 main.go:148] setting mtu 1500 for CNI 
	I1216 02:59:09.504902       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 02:59:09.504939       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T02:59:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 02:59:09.707874       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 02:59:09.707897       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 02:59:09.707920       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 02:59:09.708087       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 02:59:10.102246       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 02:59:10.102279       1 metrics.go:72] Registering metrics
	I1216 02:59:10.102746       1 controller.go:711] "Syncing nftables rules"
	I1216 02:59:19.707925       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 02:59:19.707986       1 main.go:301] handling current node
	I1216 02:59:29.711513       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 02:59:29.711571       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8611286a63a130f61845da98343f94d94d7a2f6bb2d895ef06c389f57d9c11aa] <==
	I1216 02:59:01.025967       1 policy_source.go:240] refreshing policies
	E1216 02:59:01.028323       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1216 02:59:01.079568       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 02:59:01.087871       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:01.088502       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1216 02:59:01.096446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:01.098454       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 02:59:01.227509       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 02:59:01.874668       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 02:59:01.878849       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 02:59:01.879237       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 02:59:02.379636       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 02:59:02.413683       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 02:59:02.489873       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 02:59:02.497163       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1216 02:59:02.498430       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 02:59:02.502792       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 02:59:02.904669       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 02:59:03.359782       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 02:59:03.388506       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 02:59:03.418935       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 02:59:08.355232       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 02:59:08.806600       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:08.810353       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 02:59:08.854323       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b3a30f62410fca50e714e57aaf10e73ede2ea14f906c88fdfb4b48e64594cad5] <==
	I1216 02:59:07.860107       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-837191" podCIDRs=["10.244.0.0/24"]
	I1216 02:59:07.875830       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 02:59:07.877258       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 02:59:07.902013       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 02:59:07.902034       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 02:59:07.902054       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 02:59:07.902067       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1216 02:59:07.903220       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 02:59:07.903249       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 02:59:07.903270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 02:59:07.903275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 02:59:07.903306       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 02:59:07.903341       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 02:59:07.903371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 02:59:07.903382       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 02:59:07.903375       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 02:59:07.903462       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 02:59:07.903543       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 02:59:07.907153       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:59:07.907238       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 02:59:07.908335       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 02:59:07.911851       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 02:59:07.918196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 02:59:07.929455       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 02:59:22.855551       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [88215e5886950e140a7ff17f1db589734c05090b77ed751818f3f1d5a4c3bd38] <==
	I1216 02:59:09.271923       1 server_linux.go:53] "Using iptables proxy"
	I1216 02:59:09.328449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 02:59:09.429261       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 02:59:09.429299       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 02:59:09.429383       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 02:59:09.448736       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 02:59:09.448798       1 server_linux.go:132] "Using iptables Proxier"
	I1216 02:59:09.453995       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 02:59:09.454385       1 server.go:527] "Version info" version="v1.34.2"
	I1216 02:59:09.454449       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 02:59:09.458660       1 config.go:106] "Starting endpoint slice config controller"
	I1216 02:59:09.459240       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 02:59:09.458678       1 config.go:200] "Starting service config controller"
	I1216 02:59:09.459268       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 02:59:09.458841       1 config.go:309] "Starting node config controller"
	I1216 02:59:09.459094       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 02:59:09.459279       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 02:59:09.459344       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 02:59:09.459405       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 02:59:09.559637       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 02:59:09.560884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 02:59:09.560909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d892921aadff5d7982b5f0a3c22519e237473e3650490fb4b559b2217603788] <==
	E1216 02:59:01.005640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 02:59:01.005795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 02:59:01.007411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 02:59:01.006503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:59:01.006576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:59:01.006654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 02:59:01.006685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 02:59:01.006725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:59:01.006791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 02:59:01.007492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 02:59:01.007569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 02:59:01.006006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 02:59:01.007842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 02:59:01.007852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 02:59:01.817418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 02:59:01.822324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:59:01.881238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 02:59:02.025139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:59:02.035202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 02:59:02.044799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 02:59:02.088291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 02:59:02.105981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:59:02.180064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 02:59:02.210915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1216 02:59:03.598091       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 02:59:04 pause-837191 kubelet[1320]: E1216 02:59:04.403207    1320 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-837191\" already exists" pod="kube-system/kube-scheduler-pause-837191"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.429779    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-837191" podStartSLOduration=1.429757964 podStartE2EDuration="1.429757964s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.427395626 +0000 UTC m=+1.260841767" watchObservedRunningTime="2025-12-16 02:59:04.429757964 +0000 UTC m=+1.263204104"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.453499    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-837191" podStartSLOduration=1.453475562 podStartE2EDuration="1.453475562s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.443064462 +0000 UTC m=+1.276510584" watchObservedRunningTime="2025-12-16 02:59:04.453475562 +0000 UTC m=+1.286921698"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.471978    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-837191" podStartSLOduration=1.471919415 podStartE2EDuration="1.471919415s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.455784419 +0000 UTC m=+1.289230560" watchObservedRunningTime="2025-12-16 02:59:04.471919415 +0000 UTC m=+1.305365554"
	Dec 16 02:59:04 pause-837191 kubelet[1320]: I1216 02:59:04.472159    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-837191" podStartSLOduration=1.472150327 podStartE2EDuration="1.472150327s" podCreationTimestamp="2025-12-16 02:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:04.471449964 +0000 UTC m=+1.304896105" watchObservedRunningTime="2025-12-16 02:59:04.472150327 +0000 UTC m=+1.305596468"
	Dec 16 02:59:07 pause-837191 kubelet[1320]: I1216 02:59:07.921326    1320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 02:59:07 pause-837191 kubelet[1320]: I1216 02:59:07.922165    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959584    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28c9s\" (UniqueName: \"kubernetes.io/projected/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-kube-api-access-28c9s\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959674    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e91fbf8-3a12-4de8-a517-2b92db440ff1-lib-modules\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959707    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-lib-modules\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959729    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4e91fbf8-3a12-4de8-a517-2b92db440ff1-cni-cfg\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959752    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-kube-proxy\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959778    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bf1decc-e3b6-4a2c-bbf0-a652a9508a51-xtables-lock\") pod \"kube-proxy-fmvd7\" (UID: \"4bf1decc-e3b6-4a2c-bbf0-a652a9508a51\") " pod="kube-system/kube-proxy-fmvd7"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959800    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e91fbf8-3a12-4de8-a517-2b92db440ff1-xtables-lock\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:08 pause-837191 kubelet[1320]: I1216 02:59:08.959834    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rrk8\" (UniqueName: \"kubernetes.io/projected/4e91fbf8-3a12-4de8-a517-2b92db440ff1-kube-api-access-2rrk8\") pod \"kindnet-wcl5f\" (UID: \"4e91fbf8-3a12-4de8-a517-2b92db440ff1\") " pod="kube-system/kindnet-wcl5f"
	Dec 16 02:59:09 pause-837191 kubelet[1320]: I1216 02:59:09.430850    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fmvd7" podStartSLOduration=1.430813608 podStartE2EDuration="1.430813608s" podCreationTimestamp="2025-12-16 02:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:09.420694121 +0000 UTC m=+6.254140264" watchObservedRunningTime="2025-12-16 02:59:09.430813608 +0000 UTC m=+6.264259750"
	Dec 16 02:59:09 pause-837191 kubelet[1320]: I1216 02:59:09.430968    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wcl5f" podStartSLOduration=1.430960378 podStartE2EDuration="1.430960378s" podCreationTimestamp="2025-12-16 02:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:09.430625972 +0000 UTC m=+6.264072114" watchObservedRunningTime="2025-12-16 02:59:09.430960378 +0000 UTC m=+6.264406521"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.009580    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.140247    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ce58deb-ddcb-4423-84e1-fa3a3fd0417c-config-volume\") pod \"coredns-66bc5c9577-pjnck\" (UID: \"3ce58deb-ddcb-4423-84e1-fa3a3fd0417c\") " pod="kube-system/coredns-66bc5c9577-pjnck"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.140292    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7nv4\" (UniqueName: \"kubernetes.io/projected/3ce58deb-ddcb-4423-84e1-fa3a3fd0417c-kube-api-access-p7nv4\") pod \"coredns-66bc5c9577-pjnck\" (UID: \"3ce58deb-ddcb-4423-84e1-fa3a3fd0417c\") " pod="kube-system/coredns-66bc5c9577-pjnck"
	Dec 16 02:59:20 pause-837191 kubelet[1320]: I1216 02:59:20.454130    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pjnck" podStartSLOduration=11.454106785 podStartE2EDuration="11.454106785s" podCreationTimestamp="2025-12-16 02:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:59:20.454074445 +0000 UTC m=+17.287520585" watchObservedRunningTime="2025-12-16 02:59:20.454106785 +0000 UTC m=+17.287552926"
	Dec 16 02:59:31 pause-837191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 02:59:31 pause-837191 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 02:59:31 pause-837191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:59:31 pause-837191 systemd[1]: kubelet.service: Consumed 1.202s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-837191 -n pause-837191
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-837191 -n pause-837191: exit status 2 (345.121688ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-837191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.19s)
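The post-mortem above shows the kubelet being stopped while the apiserver status still reports Running. A hedged sketch for re-checking the profile by hand (profile name taken from the logs above; all flags are standard minikube ones):

	# Hypothetical manual re-check of the pause-837191 profile after the failed pause.
	out/minikube-linux-amd64 status -p pause-837191 --alsologtostderr
	# Retry the pause with more verbose logging than the test used.
	out/minikube-linux-amd64 pause -p pause-837191 --alsologtostderr -v=5
	# Collect the full log bundle for attachment to an issue report.
	out/minikube-linux-amd64 logs -p pause-837191 --file=logs.txt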

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (277.284284ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:04:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
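The MK_ADDON_ENABLE_PAUSED error in the stderr block above shows that minikube's paused-state probe (`sudo runc list -f json` on the node) is itself failing because `/run/runc` is missing, so the addon enable never proceeds. A rough way to re-run that probe by hand, assuming the old-k8s-version-073001 profile is still running (the commands are the ones quoted in the stderr, wrapped in `minikube ssh`):

	out/minikube-linux-amd64 -p old-k8s-version-073001 ssh "sudo runc list -f json"
	out/minikube-linux-amd64 -p old-k8s-version-073001 ssh "ls -ld /run/runc"
	# the first command reproduces the failing check verbatim; the second shows
	# whether the runc state directory exists inside the node container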
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-073001 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-073001 describe deploy/metrics-server -n kube-system: exit status 1 (66.490813ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-073001 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
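The assertion at start_stop_delete_test.go:219 expects the metrics-server Deployment image to contain fake.domain/registry.k8s.io/echoserver:1.4, but because the enable command failed the Deployment was never created, so the describe output is empty. When the Deployment does exist, a one-line hand-run check of the image (an alternative to parsing `kubectl describe`, not something the test executes) would be:

	kubectl --context old-k8s-version-073001 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print fake.domain/registry.k8s.io/echoserver:1.4 once the addon applies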
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-073001
helpers_test.go:244: (dbg) docker inspect old-k8s-version-073001:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d",
	        "Created": "2025-12-16T03:03:54.698671723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264838,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:03:54.737387037Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/hosts",
	        "LogPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d-json.log",
	        "Name": "/old-k8s-version-073001",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-073001:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-073001",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d",
	                "LowerDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-073001",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-073001/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-073001",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-073001",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-073001",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "432bb26fc11ab4855fb197b2e1980a4e17e91426dc3c122fe3c783c3aaaa18ab",
	            "SandboxKey": "/var/run/docker/netns/432bb26fc11a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-073001": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5dccd8a47ad3460508b5e229ec860f06a2e52bc9489d8882cbbf26ed9824ada8",
	                    "EndpointID": "05faf17cffe5e048dbd3553cf610316266190952289c4fb4be4bfa48a45e653f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:59:73:1f:84:5c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-073001",
	                        "76d012974e40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
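The JSON above is the complete `docker inspect` record; when only one field is needed, a Go-template query narrows it down. For example, pulling out the host port mapped to the API server port 8443 (a hand-run sketch using standard `docker inspect -f` templating; the expected value comes from the Ports block above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-073001
	# expected to print 33061 for the container state captured above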
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-073001 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-073001 logs -n 25: (1.22015981s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-646016 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo docker system info                                                                                                                                                                                                      │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo containerd config dump                                                                                                                                                                                                  │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo crio config                                                                                                                                                                                                             │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p cilium-646016                                                                                                                                                                                                                              │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-073001 │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-027639 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-027639    │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-027639                                                                                                                                                                                                                        │ NoKubernetes-027639    │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-307185      │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-073001 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:03:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:03:56.983492  266278 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:03:56.983587  266278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:03:56.983599  266278 out.go:374] Setting ErrFile to fd 2...
	I1216 03:03:56.983606  266278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:03:56.983800  266278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:03:56.984344  266278 out.go:368] Setting JSON to false
	I1216 03:03:56.985440  266278 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2789,"bootTime":1765851448,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:03:56.985498  266278 start.go:143] virtualization: kvm guest
	I1216 03:03:56.987509  266278 out.go:179] * [no-preload-307185] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:03:56.989006  266278 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:03:56.989008  266278 notify.go:221] Checking for updates...
	I1216 03:03:56.991516  266278 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:03:56.992646  266278 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:03:56.993773  266278 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:03:56.994992  266278 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:03:56.996003  266278 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:03:56.997638  266278 config.go:182] Loaded profile config "kubernetes-upgrade-058433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:03:56.997737  266278 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:03:56.997803  266278 config.go:182] Loaded profile config "running-upgrade-146373": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 03:03:56.997957  266278 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:03:57.022553  266278 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:03:57.022679  266278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:03:57.077939  266278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:03:57.067316279 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:03:57.078050  266278 docker.go:319] overlay module found
	I1216 03:03:57.079834  266278 out.go:179] * Using the docker driver based on user configuration
	I1216 03:03:57.081152  266278 start.go:309] selected driver: docker
	I1216 03:03:57.081167  266278 start.go:927] validating driver "docker" against <nil>
	I1216 03:03:57.081178  266278 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:03:57.081715  266278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:03:57.138179  266278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:03:57.128436 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:03:57.138343  266278 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:03:57.138544  266278 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:03:57.140263  266278 out.go:179] * Using Docker driver with root privileges
	I1216 03:03:57.141488  266278 cni.go:84] Creating CNI manager for ""
	I1216 03:03:57.141558  266278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:03:57.141568  266278 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:03:57.141625  266278 start.go:353] cluster config:
	{Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:03:57.142997  266278 out.go:179] * Starting "no-preload-307185" primary control-plane node in "no-preload-307185" cluster
	I1216 03:03:57.144118  266278 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:03:57.145253  266278 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:03:57.146353  266278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:03:57.146455  266278 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:03:57.146467  266278 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/config.json ...
	I1216 03:03:57.146501  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/config.json: {Name:mk19c39507f62b1421041e099e0fa2ad8af7d345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:03:57.146643  266278 cache.go:107] acquiring lock: {Name:mk9c043df005d5db5fe4723c7121f40ea0f1812e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146679  266278 cache.go:107] acquiring lock: {Name:mkdf57b3d7d678135b23a9c051c86f85f24445d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146706  266278 cache.go:107] acquiring lock: {Name:mkdd9488923482e72919ad32bb6f5b3b308df98d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146709  266278 cache.go:107] acquiring lock: {Name:mk85875299d4b06a340bacb43fc637fd3eac0534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146765  266278 cache.go:107] acquiring lock: {Name:mke4bbadab765c4e0f220f70570523f5ea9b2203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146810  266278 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:03:57.146837  266278 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:03:57.146845  266278 cache.go:107] acquiring lock: {Name:mk4b159c6dc596e5ca3ffca7550c82c8dbbfcee8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146874  266278 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:03:57.146894  266278 cache.go:115] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1216 03:03:57.146908  266278 cache.go:115] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1216 03:03:57.146912  266278 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 70.033µs
	I1216 03:03:57.146800  266278 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:03:57.146928  266278 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1216 03:03:57.146920  266278 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 246.615µs
	I1216 03:03:57.146939  266278 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1216 03:03:57.146938  266278 cache.go:107] acquiring lock: {Name:mk515b27b0b3a5786bafab82ddd54f4df9a8b6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146648  266278 cache.go:107] acquiring lock: {Name:mkb5a2a6366f972707bdae2fa0fdae7fc7a4a37e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.147107  266278 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:03:57.147118  266278 cache.go:115] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 03:03:57.147129  266278 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 505.432µs
	I1216 03:03:57.147138  266278 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 03:03:57.148062  266278 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:03:57.148060  266278 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:03:57.148061  266278 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:03:57.148061  266278 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:03:57.148062  266278 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:03:57.169114  266278 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:03:57.169139  266278 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:03:57.169160  266278 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:03:57.169194  266278 start.go:360] acquireMachinesLock for no-preload-307185: {Name:mk94feb63e5fbefef1b2772890835ef937ceebef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.169298  266278 start.go:364] duration metric: took 84.161µs to acquireMachinesLock for "no-preload-307185"
	I1216 03:03:57.169330  266278 start.go:93] Provisioning new machine with config: &{Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:03:57.169436  266278 start.go:125] createHost starting for "" (driver="docker")
	W1216 03:03:52.938249  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:52.941147  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:03:52.941162  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:53.015905  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:03:53.015939  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:53.058020  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:03:53.058049  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:53.093123  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:03:53.093146  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:55.631896  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:03:55.632410  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:03:55.632467  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:55.632536  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:55.675087  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:55.675108  224341 cri.go:89] found id: ""
	I1216 03:03:55.675116  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:03:55.675168  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.679048  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:55.679114  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:55.718858  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:55.718881  224341 cri.go:89] found id: ""
	I1216 03:03:55.718891  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:03:55.718957  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.723103  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:55.723161  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:55.760013  224341 cri.go:89] found id: ""
	I1216 03:03:55.760038  224341 logs.go:282] 0 containers: []
	W1216 03:03:55.760049  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:03:55.760056  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:55.760111  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:55.801849  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:55.801873  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:55.801879  224341 cri.go:89] found id: ""
	I1216 03:03:55.801888  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:03:55.801945  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.805756  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.809415  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:55.809473  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:55.845426  224341 cri.go:89] found id: ""
	I1216 03:03:55.845452  224341 logs.go:282] 0 containers: []
	W1216 03:03:55.845464  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:55.845472  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:55.845527  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:55.882578  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:55.882604  224341 cri.go:89] found id: ""
	I1216 03:03:55.882613  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:03:55.882676  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.886726  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:55.886786  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:55.926694  224341 cri.go:89] found id: ""
	I1216 03:03:55.926716  224341 logs.go:282] 0 containers: []
	W1216 03:03:55.926724  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:55.926732  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:55.926786  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:55.963541  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:55.963566  224341 cri.go:89] found id: ""
	I1216 03:03:55.963577  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:03:55.963635  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.967619  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:55.967640  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:56.083090  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:03:56.083123  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:56.132143  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:03:56.132173  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:56.204730  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:03:56.204758  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:56.246643  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:03:56.246673  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:56.279900  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:03:56.279925  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:56.316599  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:56.316630  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:56.332057  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:56.332082  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:56.389756  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:56.389775  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:03:56.389787  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:56.426453  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:03:56.426479  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:56.459761  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:56.459784  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:55.388959  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:03:55.389383  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:03:55.389435  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:55.389482  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:55.420377  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:55.420409  233647 cri.go:89] found id: ""
	I1216 03:03:55.420416  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:03:55.420470  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.424737  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:55.424811  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:55.454011  233647 cri.go:89] found id: ""
	I1216 03:03:55.454034  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.454044  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:03:55.454050  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:55.454102  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:55.484241  233647 cri.go:89] found id: ""
	I1216 03:03:55.484281  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.484293  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:03:55.484301  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:55.484366  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:55.516302  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:55.516334  233647 cri.go:89] found id: ""
	I1216 03:03:55.516346  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:03:55.516404  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.520637  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:55.520701  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:55.551334  233647 cri.go:89] found id: ""
	I1216 03:03:55.551363  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.551375  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:55.551388  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:55.551443  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:55.582038  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:03:55.582057  233647 cri.go:89] found id: ""
	I1216 03:03:55.582064  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:03:55.582106  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.586264  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:55.586335  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:55.617074  233647 cri.go:89] found id: ""
	I1216 03:03:55.617099  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.617107  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:55.617113  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:55.617194  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:55.651370  233647 cri.go:89] found id: ""
	I1216 03:03:55.651398  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.651409  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:03:55.651421  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:55.651437  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:55.725358  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:03:55.725390  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:55.759551  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:55.759591  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:55.852078  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:55.852114  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:55.866931  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:55.866958  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:55.928907  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:55.928929  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:03:55.928944  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:55.962427  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:03:55.962456  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:55.992620  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:03:55.992649  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
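(As the commands above show, logs.go first resolves container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container's log with `crictl logs --tail 400 <id>`. A standalone sketch of that two-step flow, assuming it runs directly on a node where sudo and crictl are available:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose name
// matches the given component, like the `crictl ps -a --quiet --name=...` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component, "lookup failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, matching the `crictl logs --tail 400 <id>` calls above.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println("logs failed for", id, ":", err)
				continue
			}
			fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
		}
	}
}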
	I1216 03:03:54.623181  263091 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-073001:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (5.020901186s)
	I1216 03:03:54.623215  263091 kic.go:203] duration metric: took 5.021054298s to extract preloaded images to volume ...
	W1216 03:03:54.623327  263091 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:03:54.623370  263091 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:03:54.623421  263091 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:03:54.681873  263091 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-073001 --name old-k8s-version-073001 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-073001 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-073001 --network old-k8s-version-073001 --ip 192.168.103.2 --volume old-k8s-version-073001:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:03:54.964409  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Running}}
	I1216 03:03:54.987030  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:03:55.008874  263091 cli_runner.go:164] Run: docker exec old-k8s-version-073001 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:03:55.055890  263091 oci.go:144] the created container "old-k8s-version-073001" has a running status.
	I1216 03:03:55.055922  263091 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa...
	I1216 03:03:55.128834  263091 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:03:55.155140  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:03:55.177927  263091 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:03:55.177952  263091 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-073001 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:03:55.229398  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:03:55.254475  263091 machine.go:94] provisionDockerMachine start ...
	I1216 03:03:55.254607  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:55.283793  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:55.284342  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:55.284386  263091 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:03:55.285961  263091 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38330->127.0.0.1:33058: read: connection reset by peer
	I1216 03:03:58.456343  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-073001
	
	I1216 03:03:58.456407  263091 ubuntu.go:182] provisioning hostname "old-k8s-version-073001"
	I1216 03:03:58.456479  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:58.484352  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:58.484793  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:58.484905  263091 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-073001 && echo "old-k8s-version-073001" | sudo tee /etc/hostname
	I1216 03:03:58.650136  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-073001
	
	I1216 03:03:58.650230  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:58.671013  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:58.671334  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:58.671367  263091 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-073001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-073001/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-073001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:03:58.815635  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:03:58.815663  263091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:03:58.815707  263091 ubuntu.go:190] setting up certificates
	I1216 03:03:58.815720  263091 provision.go:84] configureAuth start
	I1216 03:03:58.815795  263091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-073001
	I1216 03:03:58.836594  263091 provision.go:143] copyHostCerts
	I1216 03:03:58.836655  263091 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:03:58.836668  263091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:03:58.836748  263091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:03:58.836866  263091 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:03:58.836877  263091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:03:58.836912  263091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:03:58.836990  263091 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:03:58.836999  263091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:03:58.837032  263091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:03:58.837089  263091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-073001 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-073001]
	I1216 03:03:59.007674  263091 provision.go:177] copyRemoteCerts
	I1216 03:03:59.007734  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:03:59.007768  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.027988  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.129477  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:03:59.152712  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 03:03:59.172888  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:03:59.192518  263091 provision.go:87] duration metric: took 376.770342ms to configureAuth
	I1216 03:03:59.192548  263091 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:03:59.192725  263091 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:03:59.192814  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.212927  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:59.213250  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:59.213271  263091 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:03:59.500792  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:03:59.500840  263091 machine.go:97] duration metric: took 4.246308922s to provisionDockerMachine
	I1216 03:03:59.500854  263091 client.go:176] duration metric: took 10.476226918s to LocalClient.Create
	I1216 03:03:59.500871  263091 start.go:167] duration metric: took 10.476282253s to libmachine.API.Create "old-k8s-version-073001"
	I1216 03:03:59.500880  263091 start.go:293] postStartSetup for "old-k8s-version-073001" (driver="docker")
	I1216 03:03:59.500893  263091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:03:59.500987  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:03:59.501036  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.520589  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.622986  263091 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:03:59.626690  263091 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:03:59.626728  263091 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:03:59.626741  263091 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:03:59.626796  263091 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:03:59.626958  263091 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:03:59.627089  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:03:59.635782  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:03:59.656175  263091 start.go:296] duration metric: took 155.280035ms for postStartSetup
	I1216 03:03:59.656535  263091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-073001
	I1216 03:03:59.674339  263091 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/config.json ...
	I1216 03:03:59.674668  263091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:03:59.674735  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.693605  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.791302  263091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:03:59.795977  263091 start.go:128] duration metric: took 10.774398328s to createHost
	I1216 03:03:59.796001  263091 start.go:83] releasing machines lock for "old-k8s-version-073001", held for 10.774640668s
	I1216 03:03:59.796084  263091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-073001
	I1216 03:03:59.814619  263091 ssh_runner.go:195] Run: cat /version.json
	I1216 03:03:59.814644  263091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:03:59.814665  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.814733  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.834525  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.835504  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.981381  263091 ssh_runner.go:195] Run: systemctl --version
	I1216 03:03:59.987796  263091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:04:00.021679  263091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:04:00.026871  263091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:04:00.026942  263091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:04:00.053045  263091 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:04:00.053078  263091 start.go:496] detecting cgroup driver to use...
	I1216 03:04:00.053113  263091 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:04:00.053172  263091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:04:00.068945  263091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:04:00.080545  263091 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:04:00.080600  263091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:04:00.096421  263091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:04:00.113104  263091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:04:00.196023  263091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:04:00.281153  263091 docker.go:234] disabling docker service ...
	I1216 03:04:00.281211  263091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:04:00.300014  263091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:04:00.313878  263091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:04:00.397412  263091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:04:00.481876  263091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:04:00.494121  263091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:04:00.508320  263091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1216 03:04:00.508377  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.518381  263091 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:04:00.518454  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.527521  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.536040  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.544510  263091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:04:00.552319  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.560698  263091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.573795  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.581942  263091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:04:00.590002  263091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:04:00.597208  263091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:00.677744  263091 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:04:00.889093  263091 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:04:00.889166  263091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:04:00.893069  263091 start.go:564] Will wait 60s for crictl version
	I1216 03:04:00.893115  263091 ssh_runner.go:195] Run: which crictl
	I1216 03:04:00.896568  263091 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:04:00.920645  263091 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:04:00.920708  263091 ssh_runner.go:195] Run: crio --version
	I1216 03:04:00.947453  263091 ssh_runner.go:195] Run: crio --version
	I1216 03:04:00.976522  263091 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
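(The sed one-liners above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf, the pause image and the cgroup driver, before CRI-O is restarted. The sketch below reproduces the same line-oriented rewrite on a local sample copy of the drop-in, so it does not depend on or modify a real CRI-O installation:)

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Work on a local copy so the sketch does not touch a real CRI-O config.
	const path = "02-crio.conf"
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.10\"\n[crio.runtime]\ncgroup_manager = \"cgroupfs\"\n"
	if err := os.WriteFile(path, []byte(sample), 0o644); err != nil {
		panic(err)
	}

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Same effect as the two sed substitutions in the log: force the pause image
	// and the cgroup driver regardless of their current values.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}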
	I1216 03:03:57.171991  266278 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:03:57.172231  266278 start.go:159] libmachine.API.Create for "no-preload-307185" (driver="docker")
	I1216 03:03:57.172278  266278 client.go:173] LocalClient.Create starting
	I1216 03:03:57.172336  266278 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:03:57.172375  266278 main.go:143] libmachine: Decoding PEM data...
	I1216 03:03:57.172407  266278 main.go:143] libmachine: Parsing certificate...
	I1216 03:03:57.172475  266278 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:03:57.172503  266278 main.go:143] libmachine: Decoding PEM data...
	I1216 03:03:57.172519  266278 main.go:143] libmachine: Parsing certificate...
	I1216 03:03:57.172867  266278 cli_runner.go:164] Run: docker network inspect no-preload-307185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:03:57.191305  266278 cli_runner.go:211] docker network inspect no-preload-307185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:03:57.191369  266278 network_create.go:284] running [docker network inspect no-preload-307185] to gather additional debugging logs...
	I1216 03:03:57.191392  266278 cli_runner.go:164] Run: docker network inspect no-preload-307185
	W1216 03:03:57.208540  266278 cli_runner.go:211] docker network inspect no-preload-307185 returned with exit code 1
	I1216 03:03:57.208570  266278 network_create.go:287] error running [docker network inspect no-preload-307185]: docker network inspect no-preload-307185: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-307185 not found
	I1216 03:03:57.208580  266278 network_create.go:289] output of [docker network inspect no-preload-307185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-307185 not found
	
	** /stderr **
	I1216 03:03:57.208657  266278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:03:57.227426  266278 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:03:57.228255  266278 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:03:57.230458  266278 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:03:57.231065  266278 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86d7bad883e2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:16:93:66:19:b2} reservation:<nil>}
	I1216 03:03:57.231526  266278 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9bbdfab3d6d3 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d6:5a:a2:42:00:d9} reservation:<nil>}
	I1216 03:03:57.232342  266278 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00240e720}
	I1216 03:03:57.232364  266278 network_create.go:124] attempt to create docker network no-preload-307185 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 03:03:57.232416  266278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-307185 no-preload-307185
	I1216 03:03:57.280720  266278 network_create.go:108] docker network no-preload-307185 192.168.94.0/24 created
	I1216 03:03:57.280753  266278 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-307185" container
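(network.go above skips every docker subnet that is already in use and settles on the first free private /24, here 192.168.94.0/24. A minimal sketch of that selection; the 192.168.49.0/24 starting point and the step of 9 in the third octet are inferred from this log, not from the minikube source:)

package main

import (
	"fmt"
	"net"
)

// pickSubnet returns the first 192.168.x.0/24 candidate that does not collide
// with any subnet already used by an existing docker network.
func pickSubnet(taken []*net.IPNet) (*net.IPNet, error) {
	for third := 49; third <= 254; third += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		collides := false
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				collides = true
				break
			}
		}
		if !collides {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// The five subnets reported as taken in the log above.
	var taken []*net.IPNet
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"} {
		_, n, _ := net.ParseCIDR(cidr)
		taken = append(taken, n)
	}
	free, err := pickSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", free) // 192.168.94.0/24, as in the log
}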
	I1216 03:03:57.280836  266278 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:03:57.298554  266278 cli_runner.go:164] Run: docker volume create no-preload-307185 --label name.minikube.sigs.k8s.io=no-preload-307185 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:03:57.300921  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1216 03:03:57.310882  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1216 03:03:57.315262  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1216 03:03:57.317224  266278 oci.go:103] Successfully created a docker volume no-preload-307185
	I1216 03:03:57.317284  266278 cli_runner.go:164] Run: docker run --rm --name no-preload-307185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307185 --entrypoint /usr/bin/test -v no-preload-307185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:03:57.318500  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1216 03:03:57.324309  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1216 03:03:57.739389  266278 oci.go:107] Successfully prepared a docker volume no-preload-307185
	I1216 03:03:57.739478  266278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1216 03:03:57.739560  266278 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:03:57.739598  266278 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:03:57.739639  266278 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:03:57.796477  266278 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-307185 --name no-preload-307185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-307185 --network no-preload-307185 --ip 192.168.94.2 --volume no-preload-307185:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:03:57.846529  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1216 03:03:57.846561  266278 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 699.930372ms
	I1216 03:03:57.846577  266278 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1216 03:03:58.070647  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Running}}
	I1216 03:03:58.089307  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:03:58.108349  266278 cli_runner.go:164] Run: docker exec no-preload-307185 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:03:58.153356  266278 oci.go:144] the created container "no-preload-307185" has a running status.
	I1216 03:03:58.153383  266278 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa...
	I1216 03:03:58.196279  266278 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:03:58.228249  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:03:58.247407  266278 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:03:58.247425  266278 kic_runner.go:114] Args: [docker exec --privileged no-preload-307185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:03:58.288115  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:03:58.312221  266278 machine.go:94] provisionDockerMachine start ...
	I1216 03:03:58.312319  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:03:58.333144  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:58.333595  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:03:58.333613  266278 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:03:58.334580  266278 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35744->127.0.0.1:33063: read: connection reset by peer
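(The handshake failure above occurs while sshd inside the freshly started container is still coming up; provisioning retries and the hostname probe succeeds a few seconds later, at 03:04:01 below. A minimal retry loop using golang.org/x/crypto/ssh; the port and key path are the ones from this run and stand in for whatever a local environment uses, and this is not libmachine's implementation:)

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical local key path mirroring the machine key in the log.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/no-preload-307185/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // kic containers get fresh host keys
		Timeout:         5 * time.Second,
	}

	// Retry the dial: right after `docker run`, sshd may not be accepting
	// connections yet, which shows up as the handshake error in the log.
	for attempt := 1; attempt <= 10; attempt++ {
		client, err := ssh.Dial("tcp", "127.0.0.1:33063", cfg)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(time.Second)
			continue
		}
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		out, err := session.CombinedOutput("hostname")
		session.Close()
		client.Close()
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname: %s", out)
		return
	}
	fmt.Println("gave up waiting for sshd")
}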
	I1216 03:03:58.477476  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1216 03:03:58.477521  266278 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.330658915s
	I1216 03:03:58.477545  266278 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 03:03:58.580964  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1216 03:03:58.581001  266278 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.434304663s
	I1216 03:03:58.581020  266278 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 03:03:58.607058  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1216 03:03:58.607089  266278 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.460409486s
	I1216 03:03:58.607104  266278 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1216 03:03:58.613940  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1216 03:03:58.613971  266278 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.467270523s
	I1216 03:03:58.613984  266278 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 03:03:58.613998  266278 cache.go:87] Successfully saved all images to host disk.
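(cache.go above reports each image tarball as soon as it exists under the host cache directory and records how long the save took. A small check of the same on-disk layout; the cache root and image names are the ones from this run, so treat them as placeholders:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// Cache layout taken from the paths in the log; adjust cacheRoot locally.
	cacheRoot := os.ExpandEnv("$HOME/.minikube/cache/images/amd64/registry.k8s.io")
	images := []string{
		"kube-apiserver_v1.35.0-beta.0",
		"kube-controller-manager_v1.35.0-beta.0",
		"kube-scheduler_v1.35.0-beta.0",
		"kube-proxy_v1.35.0-beta.0",
		"coredns/coredns_v1.13.1",
	}
	start := time.Now()
	for _, img := range images {
		p := filepath.Join(cacheRoot, img)
		info, err := os.Stat(p)
		if err != nil {
			fmt.Printf("missing: %s (%v)\n", p, err)
			continue
		}
		fmt.Printf("exists: %s (%d bytes, checked after %s)\n", p, info.Size(), time.Since(start))
	}
}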
	I1216 03:04:01.474151  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307185
	
	I1216 03:04:01.474182  266278 ubuntu.go:182] provisioning hostname "no-preload-307185"
	I1216 03:04:01.474247  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:01.493078  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:04:01.493279  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:04:01.493291  266278 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307185 && echo "no-preload-307185" | sudo tee /etc/hostname
	I1216 03:04:01.638433  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307185
	
	I1216 03:04:01.638534  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:01.657194  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:04:01.657441  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:04:01.657466  266278 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:04:01.798237  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:04:01.798276  266278 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:04:01.798306  266278 ubuntu.go:190] setting up certificates
	I1216 03:04:01.798325  266278 provision.go:84] configureAuth start
	I1216 03:04:01.798383  266278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307185
	I1216 03:04:01.819725  266278 provision.go:143] copyHostCerts
	I1216 03:04:01.819800  266278 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:04:01.819831  266278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:04:01.819926  266278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:04:01.820050  266278 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:04:01.820061  266278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:04:01.820092  266278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:04:01.820173  266278 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:04:01.820184  266278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:04:01.820222  266278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:04:01.820293  266278 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.no-preload-307185 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-307185]
	I1216 03:04:01.860212  266278 provision.go:177] copyRemoteCerts
	I1216 03:04:01.860275  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:04:01.860325  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:01.882953  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:00.977648  263091 cli_runner.go:164] Run: docker network inspect old-k8s-version-073001 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:04:00.995145  263091 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 03:04:00.999654  263091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
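(The bash pipeline above makes the /etc/hosts update idempotent: it drops any existing host.minikube.internal line and appends a fresh one pointing at the network gateway. The same drop-then-append logic, sketched in Go against a local stand-in file rather than the real /etc/hosts:)

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for the given hostname and appends a
// fresh "ip\thostname" line, mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(contents, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing empty elements so blank lines do not accumulate.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+hostname, "")
	return strings.Join(kept, "\n")
}

func main() {
	// Local stand-in for /etc/hosts; the real flow writes a temp file and copies
	// it into place with sudo, as the log shows.
	sample := "127.0.0.1\tlocalhost\n192.168.103.1\thost.minikube.internal\n"
	updated := upsertHostsEntry(sample, "192.168.103.1", "host.minikube.internal")
	if err := os.WriteFile("hosts.sample", []byte(updated), 0o644); err != nil {
		panic(err)
	}
	fmt.Print(updated)
}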
	I1216 03:04:01.010173  263091 kubeadm.go:884] updating cluster {Name:old-k8s-version-073001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-073001 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:04:01.010302  263091 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 03:04:01.010342  263091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:04:01.039297  263091 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:04:01.039315  263091 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:04:01.039356  263091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:04:01.064295  263091 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:04:01.064318  263091 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:04:01.064325  263091 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1216 03:04:01.064420  263091 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-073001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-073001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:04:01.064521  263091 ssh_runner.go:195] Run: crio config
	I1216 03:04:01.111617  263091 cni.go:84] Creating CNI manager for ""
	I1216 03:04:01.111639  263091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:01.111658  263091 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:04:01.111677  263091 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-073001 NodeName:old-k8s-version-073001 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:04:01.111801  263091 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-073001"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:04:01.111882  263091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1216 03:04:01.120315  263091 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:04:01.120386  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:04:01.128158  263091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1216 03:04:01.140872  263091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:04:01.156027  263091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1216 03:04:01.169049  263091 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:04:01.172565  263091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:04:01.182276  263091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:01.257167  263091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:01.280128  263091 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001 for IP: 192.168.103.2
	I1216 03:04:01.280147  263091 certs.go:195] generating shared ca certs ...
	I1216 03:04:01.280161  263091 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.280326  263091 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:04:01.280379  263091 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:04:01.280393  263091 certs.go:257] generating profile certs ...
	I1216 03:04:01.280451  263091 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.key
	I1216 03:04:01.280479  263091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt with IP's: []
	I1216 03:04:01.425484  263091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt ...
	I1216 03:04:01.425510  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: {Name:mkf3a97c40568c5da3dda20123f4fc0fbbbff9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.425672  263091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.key ...
	I1216 03:04:01.425689  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.key: {Name:mk95a16ee8f617246fdcb4f60fa48de82ac6ac5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.425769  263091 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e
	I1216 03:04:01.425787  263091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 03:04:01.512209  263091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e ...
	I1216 03:04:01.512238  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e: {Name:mk0cbda35d36fb3fc71fcbe38ba1d3cc195a5c18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.512402  263091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e ...
	I1216 03:04:01.512426  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e: {Name:mkf56f104ddb198bf3a0bef363952da2f9a9ac80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.512509  263091 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt
	I1216 03:04:01.512587  263091 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key
	I1216 03:04:01.512651  263091 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key
	I1216 03:04:01.512669  263091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt with IP's: []
	I1216 03:04:01.569012  263091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt ...
	I1216 03:04:01.569039  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt: {Name:mk63e7a75f98b5aa22fbfa8098ca980a7e4c9675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.569238  263091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key ...
	I1216 03:04:01.569259  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key: {Name:mkf4d398a37db0a29ab34e32185a5e96ebd560d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.569490  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:04:01.569534  263091 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:04:01.569546  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:04:01.569575  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:04:01.569603  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:04:01.569629  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:04:01.569687  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:04:01.570352  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:04:01.589543  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:04:01.606382  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:04:01.623276  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:04:01.641474  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 03:04:01.659990  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1216 03:04:01.678537  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:04:01.697868  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:04:01.717445  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:04:01.741011  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:04:01.759643  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:04:01.779794  263091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:04:01.792350  263091 ssh_runner.go:195] Run: openssl version
	I1216 03:04:01.798991  263091 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.807501  263091 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:04:01.815525  263091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.819507  263091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.819556  263091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.862037  263091 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:04:01.870951  263091 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:04:01.879762  263091 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.887671  263091 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:04:01.896061  263091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.900463  263091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.900523  263091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.936042  263091 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:04:01.943747  263091 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:04:01.951160  263091 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:04:01.959036  263091 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:04:01.966588  263091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:04:01.970635  263091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:04:01.970694  263091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:04:02.013479  263091 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:04:02.021983  263091 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:04:02.029777  263091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:04:02.033458  263091 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:04:02.033531  263091 kubeadm.go:401] StartCluster: {Name:old-k8s-version-073001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-073001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:04:02.033632  263091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:04:02.033690  263091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:04:02.064462  263091 cri.go:89] found id: ""
	I1216 03:04:02.064525  263091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:04:02.074100  263091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:04:02.082707  263091 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:02.082771  263091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:02.091243  263091 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:02.091264  263091 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:02.091309  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:04:02.099254  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:02.099315  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:02.108014  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:04:02.118590  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:02.118644  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:02.127619  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:04:02.136734  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:02.136805  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:02.146173  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:04:02.153969  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:02.154021  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:02.162764  263091 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:02.206957  263091 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1216 03:04:02.207063  263091 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:02.244835  263091 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:02.244926  263091 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:04:02.245096  263091 kubeadm.go:319] OS: Linux
	I1216 03:04:02.245186  263091 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:02.245263  263091 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:02.245334  263091 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:02.245424  263091 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:02.245510  263091 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:02.245593  263091 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:02.245676  263091 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:02.245739  263091 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:04:02.317952  263091 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:02.318083  263091 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:02.318198  263091 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 03:04:02.478593  263091 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:03:59.011887  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:03:59.012379  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:03:59.012446  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:59.012502  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:59.052877  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:59.052900  224341 cri.go:89] found id: ""
	I1216 03:03:59.052911  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:03:59.052971  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.057387  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:59.057450  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:59.097631  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:59.097657  224341 cri.go:89] found id: ""
	I1216 03:03:59.097666  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:03:59.097712  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.101698  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:59.101767  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:59.139530  224341 cri.go:89] found id: ""
	I1216 03:03:59.139550  224341 logs.go:282] 0 containers: []
	W1216 03:03:59.139557  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:03:59.139562  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:59.139624  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:59.178228  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:59.178251  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:59.178256  224341 cri.go:89] found id: ""
	I1216 03:03:59.178268  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:03:59.178331  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.182126  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.185622  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:59.185688  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:59.222091  224341 cri.go:89] found id: ""
	I1216 03:03:59.222118  224341 logs.go:282] 0 containers: []
	W1216 03:03:59.222128  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:59.222137  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:59.222199  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:59.256676  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:59.256703  224341 cri.go:89] found id: ""
	I1216 03:03:59.256713  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:03:59.256769  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.260657  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:59.260733  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:59.295566  224341 cri.go:89] found id: ""
	I1216 03:03:59.295589  224341 logs.go:282] 0 containers: []
	W1216 03:03:59.295601  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:59.295609  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:59.295672  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:59.330182  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:59.330210  224341 cri.go:89] found id: ""
	I1216 03:03:59.330220  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:03:59.330283  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.333999  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:03:59.334026  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:59.368515  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:59.368540  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:59.466981  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:59.467013  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:59.529687  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:59.529710  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:03:59.529725  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:59.578653  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:03:59.578681  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:59.612835  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:59.612862  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:59.665540  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:03:59.665571  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:59.704770  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:59.704795  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:59.720158  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:03:59.720183  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:59.757349  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:03:59.757377  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:59.834135  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:03:59.834176  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:02.377927  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:02.378373  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:02.378505  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:02.378564  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:02.418869  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:02.418897  224341 cri.go:89] found id: ""
	I1216 03:04:02.418908  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:02.419068  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.423024  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:02.423067  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:02.465875  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:02.465901  224341 cri.go:89] found id: ""
	I1216 03:04:02.465912  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:02.465977  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.470098  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:02.470182  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:02.508022  224341 cri.go:89] found id: ""
	I1216 03:04:02.508046  224341 logs.go:282] 0 containers: []
	W1216 03:04:02.508056  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:02.508076  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:02.508181  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:02.546221  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:02.546243  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:02.546249  224341 cri.go:89] found id: ""
	I1216 03:04:02.546256  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:02.546299  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.551252  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.555153  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:02.555221  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:02.591843  224341 cri.go:89] found id: ""
	I1216 03:04:02.591869  224341 logs.go:282] 0 containers: []
	W1216 03:04:02.591880  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:02.591889  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:02.591950  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:02.628765  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:02.628783  224341 cri.go:89] found id: ""
	I1216 03:04:02.628791  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:02.628860  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.632557  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:02.632618  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:02.667221  224341 cri.go:89] found id: ""
	I1216 03:04:02.667242  224341 logs.go:282] 0 containers: []
	W1216 03:04:02.667252  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:02.667259  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:02.667307  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:02.707718  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:02.707741  224341 cri.go:89] found id: ""
	I1216 03:04:02.707753  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:02.707810  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.712492  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:02.712519  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:02.731813  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:02.731855  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:02.808682  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:02.808712  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:02.857865  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:02.857899  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:02.901942  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:02.901969  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:58.523674  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:03:58.524025  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:03:58.524069  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:58.524355  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:58.567521  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:58.567542  233647 cri.go:89] found id: ""
	I1216 03:03:58.567552  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:03:58.567607  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:58.572445  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:58.572507  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:58.611526  233647 cri.go:89] found id: ""
	I1216 03:03:58.611551  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.611563  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:03:58.611570  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:58.611625  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:58.640517  233647 cri.go:89] found id: ""
	I1216 03:03:58.640546  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.640559  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:03:58.640568  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:58.640632  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:58.671980  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:58.672001  233647 cri.go:89] found id: ""
	I1216 03:03:58.672010  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:03:58.672061  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:58.676210  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:58.676278  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:58.705573  233647 cri.go:89] found id: ""
	I1216 03:03:58.705598  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.705607  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:58.705613  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:58.705658  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:58.734318  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:03:58.734341  233647 cri.go:89] found id: ""
	I1216 03:03:58.734350  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:03:58.734415  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:58.738863  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:58.738929  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:58.767548  233647 cri.go:89] found id: ""
	I1216 03:03:58.767576  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.767588  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:58.767595  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:58.767650  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:58.794764  233647 cri.go:89] found id: ""
	I1216 03:03:58.794793  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.794805  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:03:58.794829  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:58.794844  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:58.865251  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:03:58.865285  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:58.896449  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:58.896473  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:58.985656  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:58.985692  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:59.000758  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:59.000792  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:59.067015  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:59.067040  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:03:59.067057  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:59.099073  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:03:59.099101  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:59.127092  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:03:59.127131  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:01.657881  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:01.658302  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:01.658359  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:01.658408  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:01.687548  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:01.687566  233647 cri.go:89] found id: ""
	I1216 03:04:01.687573  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:01.687619  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:01.691370  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:01.691434  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:01.720852  233647 cri.go:89] found id: ""
	I1216 03:04:01.720875  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.720885  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:01.720891  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:01.720947  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:01.748699  233647 cri.go:89] found id: ""
	I1216 03:04:01.748720  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.748727  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:01.748733  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:01.748849  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:01.777606  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:01.777632  233647 cri.go:89] found id: ""
	I1216 03:04:01.777643  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:01.777697  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:01.781795  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:01.781879  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:01.810401  233647 cri.go:89] found id: ""
	I1216 03:04:01.810425  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.810436  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:01.810444  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:01.810493  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:01.839874  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:01.839896  233647 cri.go:89] found id: ""
	I1216 03:04:01.839906  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:01.839962  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:01.843981  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:01.844034  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:01.874012  233647 cri.go:89] found id: ""
	I1216 03:04:01.874033  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.874041  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:01.874047  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:01.874097  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:01.903251  233647 cri.go:89] found id: ""
	I1216 03:04:01.903274  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.903284  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:01.903295  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:01.903312  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:01.959343  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:01.959365  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:01.959380  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:01.991099  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:01.991124  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:02.019324  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:02.019365  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:02.047245  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:02.047268  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:02.108580  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:02.108615  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:02.143712  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:02.143748  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:02.236341  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:02.236379  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:02.481144  263091 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:02.481263  263091 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:02.481391  263091 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:02.595419  263091 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:04:02.721379  263091 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:04:02.830290  263091 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:04:02.981811  263091 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:04:03.220599  263091 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:04:03.220845  263091 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-073001] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 03:04:03.405917  263091 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:04:03.406077  263091 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-073001] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 03:04:03.572863  263091 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:04:03.676885  263091 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:04:03.800541  263091 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:04:03.800689  263091 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:01.983664  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:04:02.003111  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 03:04:02.022667  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:04:02.041627  266278 provision.go:87] duration metric: took 243.280062ms to configureAuth
	I1216 03:04:02.041664  266278 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:04:02.041884  266278 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:04:02.042032  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.063325  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:04:02.063612  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:04:02.063643  266278 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:04:02.362962  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:04:02.362988  266278 machine.go:97] duration metric: took 4.050745794s to provisionDockerMachine
	I1216 03:04:02.362998  266278 client.go:176] duration metric: took 5.190713111s to LocalClient.Create
	I1216 03:04:02.363018  266278 start.go:167] duration metric: took 5.190812798s to libmachine.API.Create "no-preload-307185"
	I1216 03:04:02.363028  266278 start.go:293] postStartSetup for "no-preload-307185" (driver="docker")
	I1216 03:04:02.363043  266278 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:04:02.363102  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:04:02.363150  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.384989  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.490391  266278 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:04:02.494223  266278 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:04:02.494252  266278 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:04:02.494263  266278 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:04:02.494324  266278 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:04:02.494420  266278 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:04:02.494543  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:04:02.503099  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:04:02.524353  266278 start.go:296] duration metric: took 161.309598ms for postStartSetup
	I1216 03:04:02.524734  266278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307185
	I1216 03:04:02.546606  266278 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/config.json ...
	I1216 03:04:02.546934  266278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:04:02.546975  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.567006  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.664959  266278 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:04:02.669805  266278 start.go:128] duration metric: took 5.500351228s to createHost
	I1216 03:04:02.669847  266278 start.go:83] releasing machines lock for "no-preload-307185", held for 5.500531479s
	I1216 03:04:02.669912  266278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307185
	I1216 03:04:02.691505  266278 ssh_runner.go:195] Run: cat /version.json
	I1216 03:04:02.691557  266278 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:04:02.691576  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.691641  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.712618  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.713291  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.868756  266278 ssh_runner.go:195] Run: systemctl --version
	I1216 03:04:02.876448  266278 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:04:02.915259  266278 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:04:02.920342  266278 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:04:02.920421  266278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:04:02.951095  266278 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:04:02.951115  266278 start.go:496] detecting cgroup driver to use...
	I1216 03:04:02.951156  266278 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:04:02.951205  266278 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:04:02.968904  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:04:02.981039  266278 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:04:02.981094  266278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:04:02.998032  266278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:04:03.016854  266278 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:04:03.098050  266278 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:04:03.195156  266278 docker.go:234] disabling docker service ...
	I1216 03:04:03.195229  266278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:04:03.216952  266278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:04:03.231602  266278 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:04:03.334295  266278 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:04:03.416425  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:04:03.428983  266278 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:04:03.443622  266278 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:04:03.443684  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.453618  266278 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:04:03.453686  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.462440  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.470809  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.479005  266278 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:04:03.487343  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.496270  266278 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.509284  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.517739  266278 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:04:03.525047  266278 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:04:03.533369  266278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:03.620574  266278 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:04:03.762684  266278 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:04:03.762754  266278 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:04:03.766975  266278 start.go:564] Will wait 60s for crictl version
	I1216 03:04:03.767035  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:03.771264  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:04:03.795139  266278 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:04:03.795225  266278 ssh_runner.go:195] Run: crio --version
	I1216 03:04:03.823118  266278 ssh_runner.go:195] Run: crio --version
	I1216 03:04:03.852583  266278 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
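Before the "Preparing Kubernetes" step, the log shows CRI-O being reconfigured in place: crictl is pointed at the CRI-O socket, the pause image and the systemd cgroup manager are set in /etc/crio/crio.conf.d/02-crio.conf, low ports are opened for pods via default_sysctls, IPv4 forwarding is enabled, and the service is restarted. A condensed sketch of the same sequence, with the config path and values copied from the log above:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pause image and systemd cgroup driver in the drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	# allow pods to bind low ports and enable forwarding
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio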
	I1216 03:04:04.032313  263091 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:04.299675  263091 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:04.372304  263091 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:04.492210  263091 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:04.493444  263091 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:04.498611  263091 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:03.853753  266278 cli_runner.go:164] Run: docker network inspect no-preload-307185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:04:03.872620  266278 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 03:04:03.876915  266278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:04:03.887438  266278 kubeadm.go:884] updating cluster {Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:04:03.887555  266278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:04:03.887598  266278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:04:03.914376  266278 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1216 03:04:03.914396  266278 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 03:04:03.914471  266278 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:03.914485  266278 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:03.914506  266278 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:03.914519  266278 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1216 03:04:03.914507  266278 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:03.914542  266278 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:03.914490  266278 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:03.914586  266278 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:03.915772  266278 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:03.915775  266278 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:03.915777  266278 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:03.915777  266278 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:03.915847  266278 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:03.915854  266278 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:03.915774  266278 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:03.915802  266278 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1216 03:04:04.035591  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.035786  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.036396  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.043180  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.044017  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.057996  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1216 03:04:04.110145  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.129270  266278 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1216 03:04:04.129337  266278 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.129351  266278 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1216 03:04:04.129391  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129402  266278 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1216 03:04:04.129421  266278 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.129429  266278 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1216 03:04:04.129447  266278 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.129452  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129461  266278 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1216 03:04:04.129494  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129514  266278 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1216 03:04:04.129389  266278 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.129542  266278 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1216 03:04:04.129496  266278 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.129593  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129604  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129627  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.145126  266278 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1216 03:04:04.145170  266278 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.145170  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.145207  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.145213  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.145240  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.145273  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:04:04.145313  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.145326  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.183011  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.183041  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.183044  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.183097  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.186071  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:04:04.186141  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.186187  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.221175  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.223778  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.223939  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.224304  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.254705  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.254725  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:04:04.254731  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.254751  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.254767  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1216 03:04:04.254812  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.254864  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1216 03:04:04.254898  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.255381  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:04.255450  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:04.286435  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:04.286473  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1216 03:04:04.286513  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1216 03:04:04.286543  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:04.286556  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:04:04.286475  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.286580  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1216 03:04:04.286582  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1216 03:04:04.289665  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1216 03:04:04.289690  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1216 03:04:04.289735  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1216 03:04:04.289797  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:04:04.289872  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.289889  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1216 03:04:04.292897  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1216 03:04:04.292927  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1216 03:04:04.293218  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.293259  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1216 03:04:04.303848  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1216 03:04:04.303884  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1216 03:04:04.327062  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.327097  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1216 03:04:04.434191  266278 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1216 03:04:04.434263  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1216 03:04:04.876508  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:04.919490  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1216 03:04:04.919531  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.919564  266278 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 03:04:04.919601  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.919605  266278 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:04.919649  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.171069  266278 ssh_runner.go:235] Completed: which crictl: (1.251397441s)
	I1216 03:04:06.171119  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.251492887s)
	I1216 03:04:06.171140  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:06.171152  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1216 03:04:06.171181  266278 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1216 03:04:06.171231  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
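Because no preload tarball exists for v1.35.0-beta.0 on crio, the run above falls back to the per-image cache: each required image is checked with podman image inspect, any stale copy is removed with crictl rmi, the cached tarball is copied over from the host's .minikube image cache, and the tarball is loaded with podman load. A minimal sketch of that per-image fallback, using one image from the list above as the example value:

	IMG=registry.k8s.io/pause:3.10.1           # example image from the LoadCachedImages list
	TAR=/var/lib/minikube/images/pause_3.10.1  # tarball copied from the host cache
	# if the runtime does not already have the image, drop any stale tag and load the tarball
	if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	  sudo /usr/local/bin/crictl rmi "$IMG" || true
	  sudo podman load -i "$TAR"
	fi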
	I1216 03:04:02.940916  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:02.940942  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:02.995151  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:02.995180  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:03.113309  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:03.113344  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:03.187967  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:03.187994  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:03.188013  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:03.231777  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:03.231809  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:03.287675  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:03.287881  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:05.830947  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:05.831466  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:05.831537  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:05.831601  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:05.880129  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:05.880153  224341 cri.go:89] found id: ""
	I1216 03:04:05.880163  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:05.880217  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:05.886405  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:05.886498  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:05.938074  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:05.938105  224341 cri.go:89] found id: ""
	I1216 03:04:05.938116  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:05.938181  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:05.942956  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:05.943016  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:05.990225  224341 cri.go:89] found id: ""
	I1216 03:04:05.990258  224341 logs.go:282] 0 containers: []
	W1216 03:04:05.990270  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:05.990279  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:05.990337  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:06.032680  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:06.032712  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:06.032719  224341 cri.go:89] found id: ""
	I1216 03:04:06.032733  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:06.032794  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.037967  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.042517  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:06.042580  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:06.090023  224341 cri.go:89] found id: ""
	I1216 03:04:06.090048  224341 logs.go:282] 0 containers: []
	W1216 03:04:06.090059  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:06.090066  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:06.090137  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:06.138405  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:06.138429  224341 cri.go:89] found id: ""
	I1216 03:04:06.138439  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:06.138496  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.143153  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:06.143216  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:06.184367  224341 cri.go:89] found id: ""
	I1216 03:04:06.184398  224341 logs.go:282] 0 containers: []
	W1216 03:04:06.184408  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:06.184421  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:06.184483  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:06.232665  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:06.232689  224341 cri.go:89] found id: ""
	I1216 03:04:06.232698  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:06.232754  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.237557  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:06.237580  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:06.361672  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:06.361707  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:06.428251  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:06.428288  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:06.490628  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:06.490661  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:06.564274  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:06.564304  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:06.583599  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:06.583629  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:06.653831  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:06.653866  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:06.653893  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:06.698353  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:06.698383  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:06.794116  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:06.794151  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:06.840856  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:06.840891  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:06.889217  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:06.889251  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
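While the apiserver for the run tagged 224341 keeps refusing connections on 192.168.85.2:8443, the same diagnostics cycle repeats: a healthz probe, a crictl ps sweep per control-plane component, and journalctl/crictl log collection. Roughly the same information can be gathered by hand on the node; a minimal sketch (the tail lengths match the ones used in the log):

	# probe the apiserver port; any HTTP status (even 401/403) means it is up, connection refused means it is not
	curl -ks -o /dev/null -w '%{http_code}\n' https://192.168.85.2:8443/healthz
	# list control-plane containers, running or exited
	sudo crictl ps -a
	# recent runtime and kubelet logs
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# logs of a specific (possibly exited) container by id
	sudo crictl logs --tail 400 <container-id>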
	I1216 03:04:04.755242  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:04.755653  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:04.755714  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:04.755768  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:04.785233  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:04.785257  233647 cri.go:89] found id: ""
	I1216 03:04:04.785270  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:04.785336  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.789247  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:04.789303  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:04.815637  233647 cri.go:89] found id: ""
	I1216 03:04:04.815666  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.815677  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:04.815687  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:04.815755  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:04.845854  233647 cri.go:89] found id: ""
	I1216 03:04:04.845883  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.845894  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:04.845902  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:04.845960  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:04.875863  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:04.875886  233647 cri.go:89] found id: ""
	I1216 03:04:04.875895  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:04.875960  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.880407  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:04.880477  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:04.910399  233647 cri.go:89] found id: ""
	I1216 03:04:04.910426  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.910436  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:04.910444  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:04.910496  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:04.939433  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:04.939456  233647 cri.go:89] found id: ""
	I1216 03:04:04.939466  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:04.939519  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.944067  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:04.944135  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:04.973700  233647 cri.go:89] found id: ""
	I1216 03:04:04.973728  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.973739  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:04.973746  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:04.973806  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:05.003004  233647 cri.go:89] found id: ""
	I1216 03:04:05.003033  233647 logs.go:282] 0 containers: []
	W1216 03:04:05.003045  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:05.003058  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:05.003074  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:05.016956  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:05.016984  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:05.072462  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:05.072483  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:05.072498  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:05.107385  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:05.107426  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:05.138573  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:05.138606  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:05.165268  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:05.165293  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:05.219542  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:05.219571  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:05.249525  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:05.249550  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:07.841896  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:04.500288  263091 out.go:252]   - Booting up control plane ...
	I1216 03:04:04.500430  263091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:04.500534  263091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:04.501573  263091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:04.522152  263091 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:04.523289  263091 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:04.523363  263091 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:04.678458  263091 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 03:04:09.180524  263091 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502212 seconds
	I1216 03:04:09.180751  263091 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:04:09.192727  263091 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:04:09.715333  263091 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:04:09.715610  263091 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-073001 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:04:10.228888  263091 kubeadm.go:319] [bootstrap-token] Using token: srwvus.woqadb8emztifzee
	I1216 03:04:10.233941  263091 out.go:252]   - Configuring RBAC rules ...
	I1216 03:04:10.234089  263091 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:04:10.235385  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:04:10.242481  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:04:10.247236  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:04:10.250034  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:04:10.253571  263091 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:04:10.264298  263091 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:04:10.499294  263091 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:04:10.640713  263091 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:04:10.641879  263091 kubeadm.go:319] 
	I1216 03:04:10.641973  263091 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:04:10.641983  263091 kubeadm.go:319] 
	I1216 03:04:10.642076  263091 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:04:10.642087  263091 kubeadm.go:319] 
	I1216 03:04:10.642113  263091 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:04:10.642183  263091 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:04:10.642273  263091 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:04:10.642312  263091 kubeadm.go:319] 
	I1216 03:04:10.642412  263091 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:04:10.642422  263091 kubeadm.go:319] 
	I1216 03:04:10.642491  263091 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:04:10.642499  263091 kubeadm.go:319] 
	I1216 03:04:10.642573  263091 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:04:10.642682  263091 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:04:10.642789  263091 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:04:10.642799  263091 kubeadm.go:319] 
	I1216 03:04:10.642954  263091 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:04:10.643065  263091 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:04:10.643074  263091 kubeadm.go:319] 
	I1216 03:04:10.643216  263091 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token srwvus.woqadb8emztifzee \
	I1216 03:04:10.643350  263091 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:04:10.643385  263091 kubeadm.go:319] 	--control-plane 
	I1216 03:04:10.643396  263091 kubeadm.go:319] 
	I1216 03:04:10.643523  263091 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:04:10.643532  263091 kubeadm.go:319] 
	I1216 03:04:10.643632  263091 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token srwvus.woqadb8emztifzee \
	I1216 03:04:10.643758  263091 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:04:10.646150  263091 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:04:10.646335  263091 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:10.646356  263091 cni.go:84] Creating CNI manager for ""
	I1216 03:04:10.646364  263091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:10.647742  263091 out.go:179] * Configuring CNI (Container Networking Interface) ...
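At this point kubeadm init has succeeded for old-k8s-version-073001, and because the docker driver is paired with the crio runtime the log recommends kindnet as the CNI; applying a CNI manifest is what lets the node go Ready. minikube applies its bundled kindnet manifest internally; a manual equivalent would look roughly like the following, where the manifest filename is a placeholder, not a path from this run:

	# apply a CNI manifest against the freshly initialized cluster (illustrative manifest path)
	sudo kubectl apply -f kindnet.yaml --kubeconfig=/var/lib/minikube/kubeconfig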
	I1216 03:04:07.661053  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.489799806s)
	I1216 03:04:07.661080  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1216 03:04:07.661103  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:07.661106  266278 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.489944653s)
	I1216 03:04:07.661168  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:07.661171  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:08.867203  266278 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.205931929s)
	I1216 03:04:08.867240  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.206038199s)
	I1216 03:04:08.867262  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1216 03:04:08.867288  266278 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:04:08.867335  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:04:08.867291  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:10.191168  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.323808142s)
	I1216 03:04:10.191196  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1216 03:04:10.191214  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:04:10.191227  266278 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.323831368s)
	I1216 03:04:10.191258  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:04:10.191273  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 03:04:10.191367  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:04:11.480218  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.28893578s)
	I1216 03:04:11.480249  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1216 03:04:11.480273  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:11.480321  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:11.480338  266278 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.288951678s)
	I1216 03:04:11.480368  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 03:04:11.480395  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1216 03:04:09.438608  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:09.439033  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:09.439085  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:09.439138  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:09.491099  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:09.491234  224341 cri.go:89] found id: ""
	I1216 03:04:09.491263  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:09.491344  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.496741  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:09.496964  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:09.537646  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:09.537669  224341 cri.go:89] found id: ""
	I1216 03:04:09.537679  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:09.537734  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.542422  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:09.542509  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:09.584631  224341 cri.go:89] found id: ""
	I1216 03:04:09.584661  224341 logs.go:282] 0 containers: []
	W1216 03:04:09.584671  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:09.584682  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:09.584737  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:09.621992  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:09.622025  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:09.622030  224341 cri.go:89] found id: ""
	I1216 03:04:09.622038  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:09.622090  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.626706  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.630966  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:09.631028  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:09.668513  224341 cri.go:89] found id: ""
	I1216 03:04:09.668545  224341 logs.go:282] 0 containers: []
	W1216 03:04:09.668559  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:09.668567  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:09.668621  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:09.712724  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:09.712757  224341 cri.go:89] found id: ""
	I1216 03:04:09.712765  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:09.712838  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.717834  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:09.717902  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:09.755792  224341 cri.go:89] found id: ""
	I1216 03:04:09.755839  224341 logs.go:282] 0 containers: []
	W1216 03:04:09.755851  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:09.755859  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:09.755921  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:09.792080  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:09.792107  224341 cri.go:89] found id: ""
	I1216 03:04:09.792119  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:09.792180  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.796182  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:09.796209  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:09.834786  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:09.834857  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:09.884561  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:09.884594  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:09.924698  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:09.924732  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:09.966756  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:09.966788  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:10.023237  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:10.023267  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:10.136733  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:10.136763  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:10.185983  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:10.186013  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:10.272968  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:10.272995  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:10.322863  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:10.322918  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:10.338767  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:10.338795  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:10.398927  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
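The healthz checks that follow are a retry loop: the apiserver at 192.168.85.2:8443 is probed until it answers, and each refused connection triggers another round of log gathering. A rough sketch of such a probe, assuming nothing beyond the address shown in the log and skipping TLS verification for brevity (the real client would trust the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over TLS with a cluster-internal CA,
	// so this bare probe skips verification instead of loading that CA.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			// Typically "connection refused" while the apiserver is down.
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver")
}
```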
	I1216 03:04:12.899876  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:12.900306  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:12.900364  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:12.900424  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:12.846642  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:04:12.846696  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:12.846749  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:12.878486  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:12.878503  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:12.878507  233647 cri.go:89] found id: ""
	I1216 03:04:12.878514  233647 logs.go:282] 2 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:12.878564  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.883115  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.886731  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:12.886783  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:12.914967  233647 cri.go:89] found id: ""
	I1216 03:04:12.914993  233647 logs.go:282] 0 containers: []
	W1216 03:04:12.915004  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:12.915011  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:12.915081  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:12.941248  233647 cri.go:89] found id: ""
	I1216 03:04:12.941275  233647 logs.go:282] 0 containers: []
	W1216 03:04:12.941288  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:12.941296  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:12.941354  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:12.970514  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:12.970536  233647 cri.go:89] found id: ""
	I1216 03:04:12.970545  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:12.970594  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.974652  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:12.974719  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:13.003021  233647 cri.go:89] found id: ""
	I1216 03:04:13.003045  233647 logs.go:282] 0 containers: []
	W1216 03:04:13.003056  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:13.003064  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:13.003122  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:13.032068  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:13.032091  233647 cri.go:89] found id: ""
	I1216 03:04:13.032101  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:13.032163  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.036326  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:13.036387  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:13.065149  233647 cri.go:89] found id: ""
	I1216 03:04:13.065186  233647 logs.go:282] 0 containers: []
	W1216 03:04:13.065195  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:13.065202  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:13.065257  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:13.099208  233647 cri.go:89] found id: ""
	I1216 03:04:13.099234  233647 logs.go:282] 0 containers: []
	W1216 03:04:13.099245  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:13.099264  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:13.099278  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:10.649413  263091 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:04:10.654390  263091 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1216 03:04:10.654411  263091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:04:10.671034  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:04:11.460915  263091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:04:11.460994  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:11.461016  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-073001 minikube.k8s.io/updated_at=2025_12_16T03_04_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=old-k8s-version-073001 minikube.k8s.io/primary=true
	I1216 03:04:11.471518  263091 ops.go:34] apiserver oom_adj: -16
	I1216 03:04:11.545111  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:12.045339  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:12.546085  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:13.045258  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:13.546024  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
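The repeated `kubectl get sa default` calls above are a readiness poll: bootstrap is treated as settled once the `default` service account exists. A hedged local equivalent of that loop, using plain `kubectl` on PATH instead of the versioned binary invoked in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll roughly every 500ms, as the log above does, until kubectl
	// reports the "default" service account or we give up after ~1 minute.
	for i := 0; i < 120; i++ {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for default service account")
}
```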
	I1216 03:04:12.854610  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.374266272s)
	I1216 03:04:12.854646  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1216 03:04:12.854673  266278 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:04:12.854719  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:04:13.516876  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 03:04:13.516943  266278 cache_images.go:125] Successfully loaded all cached images
	I1216 03:04:13.516954  266278 cache_images.go:94] duration metric: took 9.602542369s to LoadCachedImages
	I1216 03:04:13.516970  266278 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 03:04:13.517082  266278 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:04:13.517191  266278 ssh_runner.go:195] Run: crio config
	I1216 03:04:13.574164  266278 cni.go:84] Creating CNI manager for ""
	I1216 03:04:13.574184  266278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:13.574198  266278 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:04:13.574226  266278 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307185 NodeName:no-preload-307185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:04:13.574408  266278 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
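The generated config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`), later copied to /var/tmp/minikube/kubeadm.yaml.new. As a small illustration of that structure, standard library only and not minikube's own rendering code, the documents can be split apart and their kinds listed:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Read the rendered config; the path is taken from the log above.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm accepts several documents in one file, separated by "---".
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Println(strings.TrimSpace(line))
			}
		}
	}
}
```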
	
	I1216 03:04:13.574495  266278 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:04:13.583770  266278 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1216 03:04:13.583848  266278 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:04:13.592159  266278 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1216 03:04:13.592259  266278 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1216 03:04:13.592275  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1216 03:04:13.592439  266278 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1216 03:04:13.596606  266278 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1216 03:04:13.596628  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1216 03:04:14.551399  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:14.565793  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1216 03:04:14.570370  266278 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1216 03:04:14.570406  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1216 03:04:14.730778  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1216 03:04:14.734841  266278 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1216 03:04:14.734875  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
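Each kubectl/kubelet/kubeadm binary above goes through the same pattern: `stat -c "%s %y"` on the destination path, followed by a transfer only when stat exits non-zero. A local (non-SSH) sketch of that copy-if-missing logic, with hypothetical source and destination paths standing in for the cache and scp step:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// copyIfMissing mimics the existence check in the log: run stat on the
// destination and only transfer the file when stat fails.
func copyIfMissing(src, dst string) error {
	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
		return nil // already present, skip the copy
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical cache and target paths, standing in for the scp step.
	if err := copyIfMissing("./cache/kubelet", "/tmp/binaries/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```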
	I1216 03:04:14.902938  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:04:14.911139  266278 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 03:04:14.924210  266278 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 03:04:15.027557  266278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1216 03:04:15.040925  266278 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:04:15.044751  266278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:04:15.113447  266278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:15.193177  266278 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:15.222572  266278 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185 for IP: 192.168.94.2
	I1216 03:04:15.222592  266278 certs.go:195] generating shared ca certs ...
	I1216 03:04:15.222606  266278 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.222767  266278 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:04:15.222810  266278 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:04:15.222846  266278 certs.go:257] generating profile certs ...
	I1216 03:04:15.222923  266278 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.key
	I1216 03:04:15.222936  266278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt with IP's: []
	I1216 03:04:15.239804  266278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt ...
	I1216 03:04:15.239839  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt: {Name:mkbb1d9d6d674b7216f912d7f18b1921d34f7eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.240043  266278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.key ...
	I1216 03:04:15.240061  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.key: {Name:mk1f823c374a6d2710b2ec138116bfc954bf1945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.240186  266278 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a
	I1216 03:04:15.240203  266278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1216 03:04:15.257410  266278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a ...
	I1216 03:04:15.257433  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a: {Name:mk355e0be250ac1cc67932cde908b24fd54a0255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.257604  266278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a ...
	I1216 03:04:15.257620  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a: {Name:mkdd92510da8a63f303809f61444ea12cd95af40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.257726  266278 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt
	I1216 03:04:15.257833  266278 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key
	I1216 03:04:15.257940  266278 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key
	I1216 03:04:15.257958  266278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt with IP's: []
	I1216 03:04:15.347262  266278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt ...
	I1216 03:04:15.347294  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt: {Name:mke789521dd6396d588cece41e1ec6a2655c1c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.347489  266278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key ...
	I1216 03:04:15.347506  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key: {Name:mke97c7d39a77521fb29839f489063e708457adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
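The cert steps above generate per-profile key pairs and certificates; the apiserver cert carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2. As a loose sketch of producing a certificate with those SANs via crypto/x509 (self-signed here for brevity, whereas the real profile certs are signed by minikubeCA):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key pair for the certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// IP SANs mirror the apiserver cert in the log above (service IP,
	// loopback, and the node IP); self-signed instead of CA-signed.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-demo"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```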
	I1216 03:04:15.347711  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:04:15.347751  266278 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:04:15.347761  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:04:15.347795  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:04:15.347837  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:04:15.347868  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:04:15.347924  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:04:15.348673  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:04:15.367358  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:04:15.385345  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:04:15.403082  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:04:15.420424  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 03:04:15.438514  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:04:15.456421  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:04:15.474062  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:04:15.490934  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:04:15.511131  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:04:15.529088  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:04:15.546922  266278 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:04:15.559505  266278 ssh_runner.go:195] Run: openssl version
	I1216 03:04:15.566808  266278 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.574889  266278 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:04:15.583165  266278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.587118  266278 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.587168  266278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.627679  266278 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:04:15.635693  266278 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:04:15.643484  266278 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.650955  266278 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:04:15.658185  266278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.662078  266278 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.662126  266278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.698607  266278 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:04:15.706721  266278 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:04:15.714158  266278 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.721769  266278 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:04:15.729188  266278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.732962  266278 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.733014  266278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.767296  266278 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:04:15.775556  266278 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:04:15.783887  266278 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:04:15.787609  266278 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:04:15.787673  266278 kubeadm.go:401] StartCluster: {Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:04:15.787748  266278 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:04:15.787865  266278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:04:15.814912  266278 cri.go:89] found id: ""
	I1216 03:04:15.814989  266278 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:04:15.823123  266278 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:04:15.831704  266278 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:15.831756  266278 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:15.839707  266278 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:15.839728  266278 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:15.839763  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:04:15.847768  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:15.847843  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:15.854954  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:04:15.862885  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:15.862935  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:15.870270  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:04:15.878665  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:15.878715  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:15.886994  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:04:15.895104  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:15.895158  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
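The grep/rm sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm init can regenerate it. A hedged sketch of that check (it needs the same root privileges the log obtains via sudo):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or a config that does not point at the expected
		// control-plane endpoint is removed so kubeadm regenerates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", f)
			os.Remove(f)
		}
	}
}
```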
	I1216 03:04:15.902577  266278 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:16.014568  266278 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:04:16.071084  266278 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:12.947630  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:12.947651  224341 cri.go:89] found id: ""
	I1216 03:04:12.947660  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:12.947718  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.951840  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:12.951912  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:12.989252  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:12.989270  224341 cri.go:89] found id: ""
	I1216 03:04:12.989277  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:12.989321  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.993130  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:12.993209  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:13.032791  224341 cri.go:89] found id: ""
	I1216 03:04:13.032815  224341 logs.go:282] 0 containers: []
	W1216 03:04:13.032854  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:13.032868  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:13.032917  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:13.072341  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:13.072367  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:13.072373  224341 cri.go:89] found id: ""
	I1216 03:04:13.072382  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:13.072438  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.077091  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.080882  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:13.080954  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:13.125433  224341 cri.go:89] found id: ""
	I1216 03:04:13.125462  224341 logs.go:282] 0 containers: []
	W1216 03:04:13.125474  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:13.125490  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:13.125554  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:13.170369  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:13.170392  224341 cri.go:89] found id: ""
	I1216 03:04:13.170400  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:13.170448  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.174306  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:13.174370  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:13.217698  224341 cri.go:89] found id: ""
	I1216 03:04:13.217733  224341 logs.go:282] 0 containers: []
	W1216 03:04:13.217745  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:13.217753  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:13.217813  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:13.262721  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:13.262744  224341 cri.go:89] found id: ""
	I1216 03:04:13.262754  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:13.262837  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.267184  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:13.267211  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:13.334460  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:13.334482  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:13.334496  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:13.390840  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:13.390869  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:13.485577  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:13.485611  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:13.531410  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:13.531439  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:13.574060  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:13.574089  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:13.619699  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:13.619726  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:13.701623  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:13.701657  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:13.751906  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:13.751936  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:13.857738  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:13.857767  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:13.874169  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:13.874195  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:16.415876  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:16.416343  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:16.416403  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:16.416459  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:16.457702  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:16.457724  224341 cri.go:89] found id: ""
	I1216 03:04:16.457733  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:16.457785  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.461679  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:16.461751  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:16.495975  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:16.495995  224341 cri.go:89] found id: ""
	I1216 03:04:16.496002  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:16.496049  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.499688  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:16.499745  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:16.534113  224341 cri.go:89] found id: ""
	I1216 03:04:16.534137  224341 logs.go:282] 0 containers: []
	W1216 03:04:16.534147  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:16.534153  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:16.534201  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:16.569189  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:16.569216  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:16.569222  224341 cri.go:89] found id: ""
	I1216 03:04:16.569231  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:16.569300  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.573304  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.577186  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:16.577251  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:16.614893  224341 cri.go:89] found id: ""
	I1216 03:04:16.614925  224341 logs.go:282] 0 containers: []
	W1216 03:04:16.614936  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:16.614943  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:16.615001  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:16.650342  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:16.650361  224341 cri.go:89] found id: ""
	I1216 03:04:16.650368  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:16.650427  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.654321  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:16.654379  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:16.687411  224341 cri.go:89] found id: ""
	I1216 03:04:16.687438  224341 logs.go:282] 0 containers: []
	W1216 03:04:16.687446  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:16.687452  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:16.687508  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:16.727021  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:16.727044  224341 cri.go:89] found id: ""
	I1216 03:04:16.727053  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:16.727102  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.730811  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:16.730847  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:16.826431  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:16.826461  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:16.843225  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:16.843258  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:16.914349  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:16.914367  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:16.914384  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:16.951389  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:16.951415  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:16.997956  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:16.997988  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:17.031585  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:17.031610  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:17.073072  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:17.073099  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:17.155148  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:17.155180  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:17.200150  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:17.200178  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:17.236104  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:17.236131  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:13.197046  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:13.197087  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:04:14.045978  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:14.545782  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:15.046061  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:15.545175  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:16.045445  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:16.546270  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:17.045249  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:17.546009  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:18.045466  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:18.546059  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:19.790165  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:19.790613  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:19.790671  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:19.790722  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:19.836240  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:19.836266  224341 cri.go:89] found id: ""
	I1216 03:04:19.836276  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:19.836333  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.840180  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:19.840256  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:19.876262  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:19.876282  224341 cri.go:89] found id: ""
	I1216 03:04:19.876291  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:19.876351  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.880702  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:19.880761  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:19.931372  224341 cri.go:89] found id: ""
	I1216 03:04:19.931400  224341 logs.go:282] 0 containers: []
	W1216 03:04:19.931411  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:19.931539  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:19.931639  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:19.981968  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:19.981994  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:19.982001  224341 cri.go:89] found id: ""
	I1216 03:04:19.982011  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:19.982058  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.985944  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.989995  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:19.990053  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:20.045000  224341 cri.go:89] found id: ""
	I1216 03:04:20.045029  224341 logs.go:282] 0 containers: []
	W1216 03:04:20.045038  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:20.045045  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:20.045118  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:20.087685  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:20.087710  224341 cri.go:89] found id: ""
	I1216 03:04:20.087721  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:20.087774  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:20.092446  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:20.092528  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:20.145165  224341 cri.go:89] found id: ""
	I1216 03:04:20.145190  224341 logs.go:282] 0 containers: []
	W1216 03:04:20.145203  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:20.145211  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:20.145270  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:20.190416  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:20.190442  224341 cri.go:89] found id: ""
	I1216 03:04:20.190453  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:20.190512  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:20.194873  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:20.194895  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:20.267295  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:20.267325  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:20.267337  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:20.305052  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:20.305083  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:20.353657  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:20.353689  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:20.433463  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:20.433494  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:20.475122  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:20.475157  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:20.510661  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:20.510690  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:20.544700  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:20.544722  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:20.589377  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:20.589405  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:20.688706  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:20.688736  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:20.705015  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:20.705040  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:23.602639  266278 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:04:23.602712  266278 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:23.602904  266278 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:23.603002  266278 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:04:23.603067  266278 kubeadm.go:319] OS: Linux
	I1216 03:04:23.603145  266278 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:23.603200  266278 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:23.603282  266278 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:23.603357  266278 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:23.603443  266278 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:23.603520  266278 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:23.603597  266278 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:23.603668  266278 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:04:23.603769  266278 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:23.603949  266278 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:23.604068  266278 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:04:23.604154  266278 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:04:23.606102  266278 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:23.606220  266278 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:23.606333  266278 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:23.606428  266278 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:04:23.606513  266278 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:04:23.606598  266278 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:04:23.606666  266278 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:04:23.606756  266278 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:04:23.606949  266278 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307185] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:04:23.607032  266278 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:04:23.607201  266278 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307185] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:04:23.607294  266278 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:04:23.607382  266278 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:04:23.607446  266278 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:04:23.607524  266278 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:23.607598  266278 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:23.607698  266278 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:04:23.607803  266278 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:23.607932  266278 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:23.608010  266278 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:23.608111  266278 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:23.608198  266278 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:23.609656  266278 out.go:252]   - Booting up control plane ...
	I1216 03:04:23.609777  266278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:23.609894  266278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:23.610004  266278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:23.610148  266278 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:23.610280  266278 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:04:23.610427  266278 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:04:23.610538  266278 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:23.610603  266278 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:23.610751  266278 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:04:23.610921  266278 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:04:23.611024  266278 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.330244ms
	I1216 03:04:23.611184  266278 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:04:23.611300  266278 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1216 03:04:23.611386  266278 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:04:23.611490  266278 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:04:23.611617  266278 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004939922s
	I1216 03:04:23.611724  266278 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.866682862s
	I1216 03:04:23.611834  266278 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001281569s
	I1216 03:04:23.611970  266278 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:04:23.612156  266278 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:04:23.612252  266278 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:04:23.612533  266278 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-307185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:04:23.612619  266278 kubeadm.go:319] [bootstrap-token] Using token: 9g2v5j.7sk8fy8x333gc5hf
	I1216 03:04:23.614196  266278 out.go:252]   - Configuring RBAC rules ...
	I1216 03:04:23.614321  266278 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:04:23.614437  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:04:23.614653  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:04:23.614869  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:04:23.615042  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:04:23.615171  266278 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:04:23.615326  266278 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:04:23.615393  266278 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:04:23.615455  266278 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:04:23.615462  266278 kubeadm.go:319] 
	I1216 03:04:23.615542  266278 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:04:23.615549  266278 kubeadm.go:319] 
	I1216 03:04:23.615656  266278 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:04:23.615674  266278 kubeadm.go:319] 
	I1216 03:04:23.615712  266278 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:04:23.615797  266278 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:04:23.615868  266278 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:04:23.615877  266278 kubeadm.go:319] 
	I1216 03:04:23.615950  266278 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:04:23.615958  266278 kubeadm.go:319] 
	I1216 03:04:23.616136  266278 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:04:23.616161  266278 kubeadm.go:319] 
	I1216 03:04:23.616231  266278 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:04:23.616354  266278 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:04:23.616452  266278 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:04:23.616459  266278 kubeadm.go:319] 
	I1216 03:04:23.616565  266278 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:04:23.616666  266278 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:04:23.616673  266278 kubeadm.go:319] 
	I1216 03:04:23.616781  266278 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9g2v5j.7sk8fy8x333gc5hf \
	I1216 03:04:23.616920  266278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:04:23.616948  266278 kubeadm.go:319] 	--control-plane 
	I1216 03:04:23.616955  266278 kubeadm.go:319] 
	I1216 03:04:23.617062  266278 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:04:23.617068  266278 kubeadm.go:319] 
	I1216 03:04:23.617176  266278 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9g2v5j.7sk8fy8x333gc5hf \
	I1216 03:04:23.617313  266278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:04:23.617330  266278 cni.go:84] Creating CNI manager for ""
	I1216 03:04:23.617340  266278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:23.619028  266278 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 03:04:19.045699  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:19.546094  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:20.046043  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:20.546036  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:21.045497  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:21.545741  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:22.045475  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:22.545535  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.046011  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.545717  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.045308  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.545485  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.623099  263091 kubeadm.go:1114] duration metric: took 13.162156759s to wait for elevateKubeSystemPrivileges
	I1216 03:04:24.623139  263091 kubeadm.go:403] duration metric: took 22.589611877s to StartCluster
	I1216 03:04:24.623156  263091 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:24.623246  263091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:04:24.624669  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:24.624956  263091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:04:24.624949  263091 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:04:24.624979  263091 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:04:24.625052  263091 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-073001"
	I1216 03:04:24.625064  263091 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-073001"
	I1216 03:04:24.625073  263091 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-073001"
	I1216 03:04:24.625081  263091 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-073001"
	I1216 03:04:24.625104  263091 host.go:66] Checking if "old-k8s-version-073001" exists ...
	I1216 03:04:24.625134  263091 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:04:24.625495  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:04:24.625638  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:04:24.627386  263091 out.go:179] * Verifying Kubernetes components...
	I1216 03:04:24.628795  263091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:24.651412  263091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:24.652015  263091 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-073001"
	I1216 03:04:24.652059  263091 host.go:66] Checking if "old-k8s-version-073001" exists ...
	I1216 03:04:24.652552  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:04:24.653262  263091 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:24.653284  263091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:04:24.653335  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:04:24.680112  263091 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:24.680146  263091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:04:24.680222  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:04:24.686017  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:04:24.708030  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:04:24.746357  263091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:04:24.826483  263091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:24.828208  263091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:24.868724  263091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:25.091362  263091 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1216 03:04:25.352085  263091 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-073001" to be "Ready" ...
	I1216 03:04:25.359981  263091 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:04:23.620503  266278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:04:23.626089  266278 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1216 03:04:23.626112  266278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:04:23.640346  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:04:23.887788  266278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:04:23.888050  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.888268  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-307185 minikube.k8s.io/updated_at=2025_12_16T03_04_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=no-preload-307185 minikube.k8s.io/primary=true
	I1216 03:04:23.900446  266278 ops.go:34] apiserver oom_adj: -16
	I1216 03:04:23.971454  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.472485  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.971793  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:25.472038  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:25.971985  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:26.472452  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:26.971792  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.259983  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:23.260507  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:23.260566  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:23.260626  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:23.300479  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:23.300508  224341 cri.go:89] found id: ""
	I1216 03:04:23.300519  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:23.300581  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.304563  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:23.304630  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:23.343023  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:23.343041  224341 cri.go:89] found id: ""
	I1216 03:04:23.343049  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:23.343095  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.347187  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:23.347258  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:23.386143  224341 cri.go:89] found id: ""
	I1216 03:04:23.386167  224341 logs.go:282] 0 containers: []
	W1216 03:04:23.386175  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:23.386181  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:23.386233  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:23.435373  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:23.435401  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:23.435407  224341 cri.go:89] found id: ""
	I1216 03:04:23.435435  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:23.435497  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.440354  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.444807  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:23.444887  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:23.482204  224341 cri.go:89] found id: ""
	I1216 03:04:23.482232  224341 logs.go:282] 0 containers: []
	W1216 03:04:23.482243  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:23.482250  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:23.482310  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:23.521654  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:23.521679  224341 cri.go:89] found id: ""
	I1216 03:04:23.521689  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:23.521748  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.526131  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:23.526197  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:23.562798  224341 cri.go:89] found id: ""
	I1216 03:04:23.562833  224341 logs.go:282] 0 containers: []
	W1216 03:04:23.562844  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:23.562851  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:23.562912  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:23.601185  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:23.601601  224341 cri.go:89] found id: ""
	I1216 03:04:23.601635  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:23.601718  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.607333  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:23.607358  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:23.663321  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:23.663358  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:23.701580  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:23.701609  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:23.773301  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:23.773340  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:23.815567  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:23.815601  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:23.897998  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:23.898108  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:23.898128  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:23.948183  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:23.948223  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:23.989099  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:23.989135  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:24.104794  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:24.104844  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:24.126371  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:24.126588  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:24.180622  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:24.180650  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:26.772913  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:26.773363  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:26.773422  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:26.773483  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:26.810158  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:26.810181  224341 cri.go:89] found id: ""
	I1216 03:04:26.810188  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:26.810239  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.813907  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:26.813976  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:26.850152  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:26.850175  224341 cri.go:89] found id: ""
	I1216 03:04:26.850186  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:26.850240  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.855211  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:26.855284  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:26.893660  224341 cri.go:89] found id: ""
	I1216 03:04:26.893684  224341 logs.go:282] 0 containers: []
	W1216 03:04:26.893691  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:26.893697  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:26.893751  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:26.929607  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:26.929628  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:26.929632  224341 cri.go:89] found id: ""
	I1216 03:04:26.929639  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:26.929693  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.933638  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.937066  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:26.937125  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:26.972041  224341 cri.go:89] found id: ""
	I1216 03:04:26.972067  224341 logs.go:282] 0 containers: []
	W1216 03:04:26.972077  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:26.972085  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:26.972145  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:27.009502  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:27.009524  224341 cri.go:89] found id: ""
	I1216 03:04:27.009533  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:27.009589  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.013603  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:27.013658  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:27.052308  224341 cri.go:89] found id: ""
	I1216 03:04:27.052335  224341 logs.go:282] 0 containers: []
	W1216 03:04:27.052343  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:27.052348  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:27.052395  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:27.087498  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:27.087521  224341 cri.go:89] found id: ""
	I1216 03:04:27.087528  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:27.087584  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.091486  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:27.091506  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:27.135610  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:27.135637  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:27.171056  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:27.171085  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:27.208768  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:27.208798  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:27.247152  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:27.247180  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:27.293047  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:27.293076  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:27.369089  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:27.369119  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:27.403846  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:27.403883  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:27.457484  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:27.457516  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:27.578805  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:27.578852  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:27.596358  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:27.596385  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:27.666798  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:23.268377  233647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.071269575s)
	W1216 03:04:23.268417  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1216 03:04:23.268425  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:23.268436  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:23.302985  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:23.303010  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:23.336718  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:23.336756  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:23.352642  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:23.352674  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:23.386654  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:23.386686  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:23.422065  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:23.422098  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:23.489890  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:23.489919  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:26.024897  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:27.486938  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:46014->192.168.76.2:8443: read: connection reset by peer
	I1216 03:04:27.487013  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:27.487066  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:27.519814  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:27.519861  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:27.519867  233647 cri.go:89] found id: ""
	I1216 03:04:27.519876  233647 logs.go:282] 2 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:27.519933  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.524050  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.528435  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:27.528497  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:27.558960  233647 cri.go:89] found id: ""
	I1216 03:04:27.558988  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.559005  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:27.559013  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:27.559067  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:27.588065  233647 cri.go:89] found id: ""
	I1216 03:04:27.588093  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.588104  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:27.588113  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:27.588170  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:27.616575  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:27.616599  233647 cri.go:89] found id: ""
	I1216 03:04:27.616610  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:27.616666  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.620915  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:27.620992  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:27.648038  233647 cri.go:89] found id: ""
	I1216 03:04:27.648066  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.648078  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:27.648086  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:27.648141  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:27.678473  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:27.678490  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:27.678499  233647 cri.go:89] found id: ""
	I1216 03:04:27.678506  233647 logs.go:282] 2 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:27.678561  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.682702  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.686697  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:27.686763  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:27.712886  233647 cri.go:89] found id: ""
	I1216 03:04:27.712909  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.712917  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:27.712922  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:27.712980  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:27.739275  233647 cri.go:89] found id: ""
	I1216 03:04:27.739376  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.739416  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:27.739436  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:27.739499  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:27.806241  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:27.806270  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:27.837544  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:27.837575  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:27.857482  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:27.857520  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:27.914571  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:27.914592  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:27.914606  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:27.945517  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:27.945559  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:27.974105  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:27.974129  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:28.069246  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:28.069282  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	W1216 03:04:28.096034  233647 logs.go:130] failed kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:04:28.093580    6013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist" containerID="f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	time="2025-12-16T03:04:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1216 03:04:28.093580    6013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist" containerID="f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	time="2025-12-16T03:04:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist"
	
	** /stderr **
	I1216 03:04:28.096056  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:28.096070  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:28.121602  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:28.121628  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
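	(Editor's note on the block above: it shows minikube's log gatherer probing each control-plane component by first listing candidate container IDs with "sudo crictl ps -a --quiet --name=<component>" and then tailing each hit with "crictl logs --tail 400 <id>". Below is a minimal sketch of that discover-then-tail pattern, assuming only that crictl is available on the node; the helper names findContainers and tailLogs are hypothetical and are not minikube's own functions.)

	// Sketch only, not minikube source: discover container IDs per component
	// over CRI and tail their logs, mirroring the crictl calls in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers returns IDs of all containers (any state) whose name
	// matches the given component, e.g. "kube-apiserver".
	func findContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs returns the last n log lines for one container ID.
	func tailLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := findContainers(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no %q containers found\n", component)
				continue
			}
			for _, id := range ids {
				logs, _ := tailLogs(id, 400)
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}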
	I1216 03:04:25.361370  263091 addons.go:530] duration metric: took 736.392839ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:04:25.596297  263091 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-073001" context rescaled to 1 replicas
	W1216 03:04:27.354776  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	I1216 03:04:27.472487  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:27.972086  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:28.472460  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:28.971539  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:29.042955  266278 kubeadm.go:1114] duration metric: took 5.155022486s to wait for elevateKubeSystemPrivileges
	I1216 03:04:29.043001  266278 kubeadm.go:403] duration metric: took 13.255332897s to StartCluster
	I1216 03:04:29.043025  266278 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:29.043093  266278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:04:29.044782  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:29.045043  266278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:04:29.045072  266278 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:04:29.045131  266278 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:04:29.045280  266278 addons.go:70] Setting storage-provisioner=true in profile "no-preload-307185"
	I1216 03:04:29.045293  266278 addons.go:70] Setting default-storageclass=true in profile "no-preload-307185"
	I1216 03:04:29.045303  266278 addons.go:239] Setting addon storage-provisioner=true in "no-preload-307185"
	I1216 03:04:29.045320  266278 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-307185"
	I1216 03:04:29.045332  266278 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:04:29.045340  266278 host.go:66] Checking if "no-preload-307185" exists ...
	I1216 03:04:29.045781  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:04:29.046084  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:04:29.046810  266278 out.go:179] * Verifying Kubernetes components...
	I1216 03:04:29.050310  266278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:29.072939  266278 addons.go:239] Setting addon default-storageclass=true in "no-preload-307185"
	I1216 03:04:29.072986  266278 host.go:66] Checking if "no-preload-307185" exists ...
	I1216 03:04:29.073528  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:04:29.076926  266278 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:29.077992  266278 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:29.078013  266278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:04:29.078085  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:29.100707  266278 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:29.100732  266278 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:04:29.100792  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:29.110463  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:29.127812  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:29.141913  266278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:04:29.206746  266278 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:29.228149  266278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:29.238770  266278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:29.318526  266278 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 03:04:29.319557  266278 node_ready.go:35] waiting up to 6m0s for node "no-preload-307185" to be "Ready" ...
	I1216 03:04:29.596065  266278 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:04:29.596998  266278 addons.go:530] duration metric: took 551.868633ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:04:29.824054  266278 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-307185" context rescaled to 1 replicas
	W1216 03:04:31.323018  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
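	(Editor's note on the 266278 run above: it copies the addon manifests into /etc/kubernetes/addons/ on the node and applies them with the bundled kubectl against /var/lib/minikube/kubeconfig, then waits for the node to report Ready. A rough sketch of that apply step follows, reusing the paths shown in the log; the helper name applyAddon is hypothetical, not minikube's code.)

	// Sketch only: apply an addon manifest inside the node the way the
	// "sudo KUBECONFIG=... kubectl apply -f ..." lines above do.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func applyAddon(kubectlPath, manifest string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectlPath, "apply", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		return nil
	}

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
		for _, m := range []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		} {
			if err := applyAddon(kubectl, m); err != nil {
				fmt.Println(err)
			}
		}
	}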
	I1216 03:04:30.167261  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:30.167686  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:30.167751  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:30.167832  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:30.217161  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:30.217185  224341 cri.go:89] found id: ""
	I1216 03:04:30.217202  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:30.217257  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.221972  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:30.222039  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:30.260182  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:30.260200  224341 cri.go:89] found id: ""
	I1216 03:04:30.260207  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:30.260256  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.264295  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:30.264365  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:30.302968  224341 cri.go:89] found id: ""
	I1216 03:04:30.302994  224341 logs.go:282] 0 containers: []
	W1216 03:04:30.303005  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:30.303012  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:30.303071  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:30.350482  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:30.350507  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:30.350514  224341 cri.go:89] found id: ""
	I1216 03:04:30.350524  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:30.350588  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.355582  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.360407  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:30.360479  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:30.408075  224341 cri.go:89] found id: ""
	I1216 03:04:30.408101  224341 logs.go:282] 0 containers: []
	W1216 03:04:30.408112  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:30.408119  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:30.408179  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:30.456434  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:30.456461  224341 cri.go:89] found id: ""
	I1216 03:04:30.456472  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:30.456531  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.461724  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:30.461796  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:30.507639  224341 cri.go:89] found id: ""
	I1216 03:04:30.507665  224341 logs.go:282] 0 containers: []
	W1216 03:04:30.507675  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:30.507682  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:30.507743  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:30.557877  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:30.557906  224341 cri.go:89] found id: ""
	I1216 03:04:30.557916  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:30.557979  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.563070  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:30.563094  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:30.584259  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:30.584370  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:30.660261  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:30.660285  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:30.660304  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:30.704598  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:30.704625  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:30.755793  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:30.755840  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:30.795341  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:30.795367  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:30.859451  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:30.859483  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:30.902740  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:30.902772  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:31.009266  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:31.009304  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:31.061285  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:31.061317  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:31.146467  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:31.146492  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
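	(Editor's note: each retry round above begins by probing https://<node-ip>:8443/healthz and falls back to gathering container logs when the connection is refused. A small sketch of such a healthz wait loop follows; the insecure TLS client and the fixed 3-second retry interval are assumptions for illustration, not details taken from this report.)

	// Sketch only: poll the apiserver /healthz endpoint until it returns 200
	// or a timeout elapses, as the "Checking apiserver healthz" lines above do.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(3 * time.Second) // connection refused or not ready yet; retry
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ok")
	}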
	I1216 03:04:30.648909  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:30.649359  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:30.649425  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:30.649489  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:30.680129  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:30.680156  233647 cri.go:89] found id: ""
	I1216 03:04:30.680166  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:30.680277  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.684176  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:30.684242  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:30.714636  233647 cri.go:89] found id: ""
	I1216 03:04:30.714663  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.714674  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:30.714680  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:30.714724  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:30.744320  233647 cri.go:89] found id: ""
	I1216 03:04:30.744346  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.744357  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:30.744365  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:30.744411  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:30.775597  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:30.775618  233647 cri.go:89] found id: ""
	I1216 03:04:30.775628  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:30.775688  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.779894  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:30.779991  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:30.810482  233647 cri.go:89] found id: ""
	I1216 03:04:30.810505  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.810514  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:30.810520  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:30.810566  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:30.839730  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:30.839749  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:30.839753  233647 cri.go:89] found id: ""
	I1216 03:04:30.839761  233647 logs.go:282] 2 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:30.839833  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.843942  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.847643  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:30.847697  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:30.879330  233647 cri.go:89] found id: ""
	I1216 03:04:30.879359  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.879370  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:30.879378  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:30.879461  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:30.909702  233647 cri.go:89] found id: ""
	I1216 03:04:30.909727  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.909737  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:30.909750  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:30.909760  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:30.993496  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:30.993532  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:31.052325  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:31.052342  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:31.052353  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:31.081434  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:31.081468  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:31.112091  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:31.112115  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:31.143865  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:31.143896  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:31.158929  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:31.158956  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:31.192495  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:31.192521  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:31.219539  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:31.219563  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1216 03:04:29.356808  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	W1216 03:04:31.855954  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	W1216 03:04:33.822689  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	W1216 03:04:35.822902  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	I1216 03:04:33.683731  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:33.684182  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:33.684245  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:33.684313  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:33.719638  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:33.719660  224341 cri.go:89] found id: ""
	I1216 03:04:33.719668  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:33.719732  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.723564  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:33.723623  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:33.756396  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:33.756418  224341 cri.go:89] found id: ""
	I1216 03:04:33.756427  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:33.756485  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.760193  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:33.760241  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:33.794948  224341 cri.go:89] found id: ""
	I1216 03:04:33.794973  224341 logs.go:282] 0 containers: []
	W1216 03:04:33.794983  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:33.794990  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:33.795054  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:33.831869  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:33.831888  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:33.831894  224341 cri.go:89] found id: ""
	I1216 03:04:33.831903  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:33.831966  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.836217  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.840689  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:33.840754  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:33.882263  224341 cri.go:89] found id: ""
	I1216 03:04:33.882287  224341 logs.go:282] 0 containers: []
	W1216 03:04:33.882299  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:33.882306  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:33.882369  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:33.919801  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:33.919834  224341 cri.go:89] found id: ""
	I1216 03:04:33.919845  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:33.919912  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.923626  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:33.923676  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:33.960911  224341 cri.go:89] found id: ""
	I1216 03:04:33.960939  224341 logs.go:282] 0 containers: []
	W1216 03:04:33.960950  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:33.960958  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:33.961020  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:33.999211  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:33.999231  224341 cri.go:89] found id: ""
	I1216 03:04:33.999240  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:33.999335  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:34.003231  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:34.003252  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:34.063694  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:34.063732  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:34.168760  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:34.168798  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:34.187537  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:34.187567  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:34.261810  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:34.261842  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:34.261857  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:34.301375  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:34.301404  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:34.351962  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:34.351999  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:34.402382  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:34.402409  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:34.440734  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:34.440757  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:34.515640  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:34.515672  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:34.553729  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:34.553757  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:37.089427  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:37.089839  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:37.089910  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:37.089965  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:37.128978  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:37.129000  224341 cri.go:89] found id: ""
	I1216 03:04:37.129010  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:37.129064  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.133375  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:37.133446  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:37.174287  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:37.174313  224341 cri.go:89] found id: ""
	I1216 03:04:37.174323  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:37.174370  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.178662  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:37.178733  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:37.221546  224341 cri.go:89] found id: ""
	I1216 03:04:37.221567  224341 logs.go:282] 0 containers: []
	W1216 03:04:37.221574  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:37.221579  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:37.221624  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:37.256908  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:37.256931  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:37.256940  224341 cri.go:89] found id: ""
	I1216 03:04:37.256951  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:37.257012  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.260770  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.264215  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:37.264273  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:37.308112  224341 cri.go:89] found id: ""
	I1216 03:04:37.308146  224341 logs.go:282] 0 containers: []
	W1216 03:04:37.308158  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:37.308168  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:37.308291  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:37.355291  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:37.355314  224341 cri.go:89] found id: ""
	I1216 03:04:37.355324  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:37.355381  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.361033  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:37.361143  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:37.400370  224341 cri.go:89] found id: ""
	I1216 03:04:37.400393  224341 logs.go:282] 0 containers: []
	W1216 03:04:37.400402  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:37.400410  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:37.400469  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:37.436795  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:37.436828  224341 cri.go:89] found id: ""
	I1216 03:04:37.436839  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:37.436893  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.440984  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:37.441004  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:37.480346  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:37.480374  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:37.563172  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:37.563207  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:37.607766  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:37.607793  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:37.645038  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:37.645062  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:37.706961  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:37.706993  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:37.724013  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:37.724039  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:37.772027  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:37.772057  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:37.806697  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:37.806721  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:37.845698  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:37.845730  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:33.778977  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:33.779389  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:33.779443  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:33.779503  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:33.809015  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:33.809039  233647 cri.go:89] found id: ""
	I1216 03:04:33.809050  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:33.809108  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.813147  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:33.813220  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:33.843689  233647 cri.go:89] found id: ""
	I1216 03:04:33.843712  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.843720  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:33.843726  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:33.843766  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:33.874922  233647 cri.go:89] found id: ""
	I1216 03:04:33.874950  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.874962  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:33.874969  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:33.875030  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:33.904575  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:33.904598  233647 cri.go:89] found id: ""
	I1216 03:04:33.904606  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:33.904665  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.909588  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:33.909656  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:33.937449  233647 cri.go:89] found id: ""
	I1216 03:04:33.937474  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.937484  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:33.937491  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:33.937558  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:33.965216  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:33.965240  233647 cri.go:89] found id: ""
	I1216 03:04:33.965251  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:33.965313  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.969212  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:33.969265  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:33.997607  233647 cri.go:89] found id: ""
	I1216 03:04:33.997633  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.997642  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:33.997648  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:33.997693  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:34.027141  233647 cri.go:89] found id: ""
	I1216 03:04:34.027168  233647 logs.go:282] 0 containers: []
	W1216 03:04:34.027178  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:34.027187  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:34.027203  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:34.054148  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:34.054178  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:34.083001  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:34.083029  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:34.144728  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:34.144779  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:34.179844  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:34.179879  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:34.287130  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:34.287162  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:34.304086  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:34.304118  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:34.363856  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:34.363905  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:34.363922  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:36.899991  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:36.900396  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:36.900450  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:36.900512  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:36.928846  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:36.928867  233647 cri.go:89] found id: ""
	I1216 03:04:36.928876  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:36.928933  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:36.932763  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:36.932812  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:36.961114  233647 cri.go:89] found id: ""
	I1216 03:04:36.961142  233647 logs.go:282] 0 containers: []
	W1216 03:04:36.961154  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:36.961161  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:36.961230  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:36.992750  233647 cri.go:89] found id: ""
	I1216 03:04:36.992771  233647 logs.go:282] 0 containers: []
	W1216 03:04:36.992780  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:36.992786  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:36.992854  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:37.020564  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:37.020586  233647 cri.go:89] found id: ""
	I1216 03:04:37.020594  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:37.020648  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.024746  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:37.024802  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:37.051149  233647 cri.go:89] found id: ""
	I1216 03:04:37.051170  233647 logs.go:282] 0 containers: []
	W1216 03:04:37.051178  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:37.051186  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:37.051230  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:37.077572  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:37.077591  233647 cri.go:89] found id: ""
	I1216 03:04:37.077598  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:37.077651  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.081489  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:37.081539  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:37.110429  233647 cri.go:89] found id: ""
	I1216 03:04:37.110459  233647 logs.go:282] 0 containers: []
	W1216 03:04:37.110473  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:37.110480  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:37.110533  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:37.140366  233647 cri.go:89] found id: ""
	I1216 03:04:37.140391  233647 logs.go:282] 0 containers: []
	W1216 03:04:37.140403  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:37.140414  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:37.140428  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:37.239331  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:37.239370  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:37.255575  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:37.255602  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:37.326926  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:37.326951  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:37.326968  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:37.370783  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:37.370808  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:37.398896  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:37.398925  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:37.425339  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:37.425363  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:37.489086  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:37.489120  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 03:04:34.355384  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	W1216 03:04:36.355669  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	I1216 03:04:37.356434  263091 node_ready.go:49] node "old-k8s-version-073001" is "Ready"
	I1216 03:04:37.356462  263091 node_ready.go:38] duration metric: took 12.004333871s for node "old-k8s-version-073001" to be "Ready" ...
	I1216 03:04:37.356480  263091 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:04:37.356528  263091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:04:37.370849  263091 api_server.go:72] duration metric: took 12.745793596s to wait for apiserver process to appear ...
	I1216 03:04:37.370869  263091 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:04:37.370897  263091 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 03:04:37.376057  263091 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 03:04:37.377242  263091 api_server.go:141] control plane version: v1.28.0
	I1216 03:04:37.377269  263091 api_server.go:131] duration metric: took 6.391967ms to wait for apiserver health ...
	I1216 03:04:37.377278  263091 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:04:37.381006  263091 system_pods.go:59] 8 kube-system pods found
	I1216 03:04:37.381043  263091 system_pods.go:61] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:37.381052  263091 system_pods.go:61] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:37.381060  263091 system_pods.go:61] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:37.381066  263091 system_pods.go:61] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:37.381071  263091 system_pods.go:61] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:37.381080  263091 system_pods.go:61] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:37.381086  263091 system_pods.go:61] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:37.381093  263091 system_pods.go:61] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:37.381108  263091 system_pods.go:74] duration metric: took 3.822929ms to wait for pod list to return data ...
	I1216 03:04:37.381122  263091 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:04:37.383128  263091 default_sa.go:45] found service account: "default"
	I1216 03:04:37.383147  263091 default_sa.go:55] duration metric: took 2.018975ms for default service account to be created ...
	I1216 03:04:37.383157  263091 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:04:37.387599  263091 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:37.387632  263091 system_pods.go:89] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:37.387640  263091 system_pods.go:89] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:37.387648  263091 system_pods.go:89] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:37.387653  263091 system_pods.go:89] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:37.387659  263091 system_pods.go:89] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:37.387665  263091 system_pods.go:89] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:37.387671  263091 system_pods.go:89] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:37.387682  263091 system_pods.go:89] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:37.387717  263091 retry.go:31] will retry after 235.616732ms: missing components: kube-dns
	I1216 03:04:37.628167  263091 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:37.628204  263091 system_pods.go:89] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:37.628212  263091 system_pods.go:89] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:37.628220  263091 system_pods.go:89] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:37.628226  263091 system_pods.go:89] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:37.628232  263091 system_pods.go:89] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:37.628237  263091 system_pods.go:89] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:37.628242  263091 system_pods.go:89] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:37.628251  263091 system_pods.go:89] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:37.628270  263091 retry.go:31] will retry after 382.482522ms: missing components: kube-dns
	I1216 03:04:38.015537  263091 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:38.015569  263091 system_pods.go:89] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Running
	I1216 03:04:38.015579  263091 system_pods.go:89] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:38.015585  263091 system_pods.go:89] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:38.015590  263091 system_pods.go:89] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:38.015596  263091 system_pods.go:89] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:38.015601  263091 system_pods.go:89] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:38.015606  263091 system_pods.go:89] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:38.015611  263091 system_pods.go:89] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Running
	I1216 03:04:38.015620  263091 system_pods.go:126] duration metric: took 632.456255ms to wait for k8s-apps to be running ...
	I1216 03:04:38.015633  263091 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:04:38.015681  263091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:38.029077  263091 system_svc.go:56] duration metric: took 13.436289ms WaitForService to wait for kubelet
	I1216 03:04:38.029102  263091 kubeadm.go:587] duration metric: took 13.404051181s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:04:38.029124  263091 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:04:38.031756  263091 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:04:38.031785  263091 node_conditions.go:123] node cpu capacity is 8
	I1216 03:04:38.031805  263091 node_conditions.go:105] duration metric: took 2.675128ms to run NodePressure ...
	I1216 03:04:38.031832  263091 start.go:242] waiting for startup goroutines ...
	I1216 03:04:38.031841  263091 start.go:247] waiting for cluster config update ...
	I1216 03:04:38.031857  263091 start.go:256] writing updated cluster config ...
	I1216 03:04:38.032283  263091 ssh_runner.go:195] Run: rm -f paused
	I1216 03:04:38.035943  263091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:38.040092  263091 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-8lk58" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.044415  263091 pod_ready.go:94] pod "coredns-5dd5756b68-8lk58" is "Ready"
	I1216 03:04:38.044438  263091 pod_ready.go:86] duration metric: took 4.325397ms for pod "coredns-5dd5756b68-8lk58" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.047013  263091 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.050901  263091 pod_ready.go:94] pod "etcd-old-k8s-version-073001" is "Ready"
	I1216 03:04:38.050918  263091 pod_ready.go:86] duration metric: took 3.888416ms for pod "etcd-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.053525  263091 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.057315  263091 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-073001" is "Ready"
	I1216 03:04:38.057336  263091 pod_ready.go:86] duration metric: took 3.793165ms for pod "kube-apiserver-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.059556  263091 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.440942  263091 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-073001" is "Ready"
	I1216 03:04:38.440971  263091 pod_ready.go:86] duration metric: took 381.398224ms for pod "kube-controller-manager-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.640793  263091 pod_ready.go:83] waiting for pod "kube-proxy-mhxd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.040807  263091 pod_ready.go:94] pod "kube-proxy-mhxd9" is "Ready"
	I1216 03:04:39.040870  263091 pod_ready.go:86] duration metric: took 400.044513ms for pod "kube-proxy-mhxd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.241603  263091 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.640048  263091 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-073001" is "Ready"
	I1216 03:04:39.640074  263091 pod_ready.go:86] duration metric: took 398.449646ms for pod "kube-scheduler-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.640084  263091 pod_ready.go:40] duration metric: took 1.604105384s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:39.685502  263091 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1216 03:04:39.687002  263091 out.go:203] 
	W1216 03:04:39.688223  263091 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1216 03:04:39.689409  263091 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1216 03:04:39.690775  263091 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-073001" cluster and "default" namespace by default
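	(The old-k8s-version-073001 start above finishes with the readiness sequence the log walks through: node Ready, apiserver /healthz, kube-system pods, default service account, then an extra per-pod Ready wait. A rough equivalent with plain kubectl, assuming the kubeconfig already targets this profile; the 4m timeout mirrors the extra-wait budget in the log:

	    # apiserver health, node state, and kube-system pod state
	    kubectl get --raw='/healthz'
	    kubectl get nodes
	    kubectl -n kube-system get pods
	    # wait for every kube-system pod to report Ready, like the pod_ready loop above
	    kubectl -n kube-system wait --for=condition=Ready pod --all --timeout=4m0s
	)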
	W1216 03:04:38.322668  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	W1216 03:04:40.822769  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	I1216 03:04:41.823191  266278 node_ready.go:49] node "no-preload-307185" is "Ready"
	I1216 03:04:41.823216  266278 node_ready.go:38] duration metric: took 12.503636541s for node "no-preload-307185" to be "Ready" ...
	I1216 03:04:41.823229  266278 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:04:41.823284  266278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:04:41.835480  266278 api_server.go:72] duration metric: took 12.790371447s to wait for apiserver process to appear ...
	I1216 03:04:41.835503  266278 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:04:41.835523  266278 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 03:04:41.839474  266278 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 03:04:41.840373  266278 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 03:04:41.840394  266278 api_server.go:131] duration metric: took 4.885221ms to wait for apiserver health ...
	I1216 03:04:41.840401  266278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:04:41.843381  266278 system_pods.go:59] 8 kube-system pods found
	I1216 03:04:41.843409  266278 system_pods.go:61] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:41.843414  266278 system_pods.go:61] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:41.843420  266278 system_pods.go:61] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:41.843424  266278 system_pods.go:61] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:41.843430  266278 system_pods.go:61] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:41.843433  266278 system_pods.go:61] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:41.843436  266278 system_pods.go:61] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:41.843441  266278 system_pods.go:61] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:41.843448  266278 system_pods.go:74] duration metric: took 3.043015ms to wait for pod list to return data ...
	I1216 03:04:41.843457  266278 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:04:41.845934  266278 default_sa.go:45] found service account: "default"
	I1216 03:04:41.845952  266278 default_sa.go:55] duration metric: took 2.489686ms for default service account to be created ...
	I1216 03:04:41.845960  266278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:04:41.848539  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:41.848569  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:41.848578  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:41.848586  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:41.848592  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:41.848601  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:41.848608  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:41.848617  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:41.848626  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:41.848650  266278 retry.go:31] will retry after 282.258049ms: missing components: kube-dns
	I1216 03:04:37.947984  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:37.948015  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:38.009727  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:40.510140  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:40.510568  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:40.510633  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:40.510696  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:40.547520  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:40.547546  224341 cri.go:89] found id: ""
	I1216 03:04:40.547555  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:40.547609  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.551567  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:40.551622  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:40.593028  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:40.593052  224341 cri.go:89] found id: ""
	I1216 03:04:40.593063  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:40.593124  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.597202  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:40.597253  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:40.633469  224341 cri.go:89] found id: ""
	I1216 03:04:40.633498  224341 logs.go:282] 0 containers: []
	W1216 03:04:40.633509  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:40.633518  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:40.633577  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:40.669134  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:40.669169  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:40.669173  224341 cri.go:89] found id: ""
	I1216 03:04:40.669180  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:40.669233  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.673156  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.676673  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:40.676724  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:40.711538  224341 cri.go:89] found id: ""
	I1216 03:04:40.711564  224341 logs.go:282] 0 containers: []
	W1216 03:04:40.711572  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:40.711578  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:40.711627  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:40.747041  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:40.747061  224341 cri.go:89] found id: ""
	I1216 03:04:40.747068  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:40.747132  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.751110  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:40.751180  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:40.786123  224341 cri.go:89] found id: ""
	I1216 03:04:40.786156  224341 logs.go:282] 0 containers: []
	W1216 03:04:40.786167  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:40.786175  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:40.786228  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:40.821410  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:40.821434  224341 cri.go:89] found id: ""
	I1216 03:04:40.821445  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:40.821502  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.825419  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:40.825442  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:40.861223  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:40.861254  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:40.896085  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:40.896113  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:40.952317  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:40.952347  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:41.054168  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:41.054199  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:41.115942  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:41.115963  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:41.115978  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:41.156998  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:41.157028  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:41.196355  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:41.196386  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:41.213363  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:41.213394  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:41.261615  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:41.261652  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:41.348753  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:41.348782  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:40.025584  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:40.026067  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:40.026114  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:40.026164  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:40.054288  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:40.054307  233647 cri.go:89] found id: ""
	I1216 03:04:40.054316  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:40.054366  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.058203  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:40.058257  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:40.083760  233647 cri.go:89] found id: ""
	I1216 03:04:40.083784  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.083795  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:40.083803  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:40.083898  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:40.110529  233647 cri.go:89] found id: ""
	I1216 03:04:40.110556  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.110574  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:40.110583  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:40.110647  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:40.136367  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:40.136395  233647 cri.go:89] found id: ""
	I1216 03:04:40.136406  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:40.136463  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.140559  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:40.140621  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:40.169045  233647 cri.go:89] found id: ""
	I1216 03:04:40.169075  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.169091  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:40.169099  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:40.169160  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:40.200419  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:40.200443  233647 cri.go:89] found id: ""
	I1216 03:04:40.200452  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:40.200506  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.205230  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:40.205288  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:40.231274  233647 cri.go:89] found id: ""
	I1216 03:04:40.231295  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.231304  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:40.231311  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:40.231367  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:40.260341  233647 cri.go:89] found id: ""
	I1216 03:04:40.260361  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.260369  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:40.260377  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:40.260391  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:40.286085  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:40.286111  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:40.312754  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:40.312782  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:40.371864  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:40.371894  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:40.402082  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:40.402112  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:40.485537  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:40.485568  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:40.500011  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:40.500039  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:40.561081  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:40.561108  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:40.561123  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:43.095990  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:43.096401  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:43.096454  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:43.096500  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:43.123846  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:43.123866  233647 cri.go:89] found id: ""
	I1216 03:04:43.123873  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:43.123935  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:43.127889  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:43.127956  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:42.135077  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:42.135109  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:42.135114  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:42.135120  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:42.135124  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:42.135128  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:42.135133  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:42.135136  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:42.135140  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:42.135155  266278 retry.go:31] will retry after 313.389916ms: missing components: kube-dns
	I1216 03:04:42.452939  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:42.452972  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:42.452980  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:42.452988  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:42.452992  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:42.452998  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:42.453003  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:42.453009  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:42.453016  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:42.453036  266278 retry.go:31] will retry after 359.676321ms: missing components: kube-dns
	I1216 03:04:42.816812  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:42.816871  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:42.816877  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:42.816883  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:42.816886  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:42.816891  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:42.816894  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:42.816904  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:42.816909  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:42.816923  266278 retry.go:31] will retry after 371.549417ms: missing components: kube-dns
	I1216 03:04:43.192959  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:43.192993  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Running
	I1216 03:04:43.192999  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:43.193003  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:43.193007  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:43.193011  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:43.193014  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:43.193017  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:43.193020  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Running
	I1216 03:04:43.193029  266278 system_pods.go:126] duration metric: took 1.34706308s to wait for k8s-apps to be running ...
	I1216 03:04:43.193038  266278 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:04:43.193086  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:43.208031  266278 system_svc.go:56] duration metric: took 14.983786ms WaitForService to wait for kubelet
	I1216 03:04:43.208063  266278 kubeadm.go:587] duration metric: took 14.16295763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:04:43.208088  266278 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:04:43.211118  266278 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:04:43.211150  266278 node_conditions.go:123] node cpu capacity is 8
	I1216 03:04:43.211170  266278 node_conditions.go:105] duration metric: took 3.075802ms to run NodePressure ...
	I1216 03:04:43.211183  266278 start.go:242] waiting for startup goroutines ...
	I1216 03:04:43.211196  266278 start.go:247] waiting for cluster config update ...
	I1216 03:04:43.211222  266278 start.go:256] writing updated cluster config ...
	I1216 03:04:43.211503  266278 ssh_runner.go:195] Run: rm -f paused
	I1216 03:04:43.215638  266278 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:43.219346  266278 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nm9bc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.223657  266278 pod_ready.go:94] pod "coredns-7d764666f9-nm9bc" is "Ready"
	I1216 03:04:43.223677  266278 pod_ready.go:86] duration metric: took 4.310872ms for pod "coredns-7d764666f9-nm9bc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.225765  266278 pod_ready.go:83] waiting for pod "etcd-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.229783  266278 pod_ready.go:94] pod "etcd-no-preload-307185" is "Ready"
	I1216 03:04:43.229810  266278 pod_ready.go:86] duration metric: took 4.024063ms for pod "etcd-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.293783  266278 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.298594  266278 pod_ready.go:94] pod "kube-apiserver-no-preload-307185" is "Ready"
	I1216 03:04:43.298621  266278 pod_ready.go:86] duration metric: took 4.808ms for pod "kube-apiserver-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.300770  266278 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.619624  266278 pod_ready.go:94] pod "kube-controller-manager-no-preload-307185" is "Ready"
	I1216 03:04:43.619645  266278 pod_ready.go:86] duration metric: took 318.853802ms for pod "kube-controller-manager-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.819653  266278 pod_ready.go:83] waiting for pod "kube-proxy-tp2h2" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.220236  266278 pod_ready.go:94] pod "kube-proxy-tp2h2" is "Ready"
	I1216 03:04:44.220266  266278 pod_ready.go:86] duration metric: took 400.587068ms for pod "kube-proxy-tp2h2" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.420794  266278 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.820532  266278 pod_ready.go:94] pod "kube-scheduler-no-preload-307185" is "Ready"
	I1216 03:04:44.820559  266278 pod_ready.go:86] duration metric: took 399.731672ms for pod "kube-scheduler-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.820570  266278 pod_ready.go:40] duration metric: took 1.604895974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:44.871646  266278 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 03:04:44.876299  266278 out.go:179] * Done! kubectl is now configured to use "no-preload-307185" cluster and "default" namespace by default
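	(Both successful starts end with a client/server skew note: minor skew 6 against v1.28.0 for old-k8s-version-073001 above, minor skew 1 against v1.35.0-beta.0 for no-preload-307185 here. A quick way to confirm or sidestep the skew, using the hint the log itself prints:

	    # show client and server versions to confirm the skew
	    kubectl version
	    # or use the version-matched kubectl bundled with the profile
	    minikube kubectl -- get pods -A
	)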
	I1216 03:04:43.904075  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:43.904484  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:43.904539  224341 kubeadm.go:602] duration metric: took 4m14.640353632s to restartPrimaryControlPlane
	W1216 03:04:43.904587  224341 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:04:43.904641  224341 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 03:04:44.619626  224341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:44.631601  224341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:04:44.641269  224341 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:44.641332  224341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:44.650573  224341 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:44.650596  224341 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:44.650645  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:04:44.660301  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:44.660364  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:44.669651  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:04:44.678800  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:44.678879  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:44.688010  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:04:44.697239  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:44.697310  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:44.705917  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:04:44.715673  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:44.715726  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:44.724368  224341 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:44.779778  224341 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:04:44.839245  224341 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
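	(With the apiserver still refusing connections after roughly 4m14s, the run tagged 224341 gives up on restarting the existing control plane and resets it: kubeadm reset, removal of the stale kubeconfigs under /etc/kubernetes, then a fresh kubeadm init. A condensed sketch of that sequence using the paths and socket shown in the log; the per-file grep-then-remove is collapsed into one loop, and the log's long --ignore-preflight-errors list is omitted:

	    # wipe the old control-plane state
	    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    # drop any kubeconfig that does not point at control-plane.minikube.internal:8443
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	    done
	    # re-initialize from the generated config
	    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	)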
	I1216 03:04:43.154733  233647 cri.go:89] found id: ""
	I1216 03:04:43.154752  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.154759  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:43.154764  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:43.154807  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:43.182338  233647 cri.go:89] found id: ""
	I1216 03:04:43.182362  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.182372  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:43.182379  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:43.182436  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:43.211125  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:43.211146  233647 cri.go:89] found id: ""
	I1216 03:04:43.211166  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:43.211219  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:43.215454  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:43.215518  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:43.245421  233647 cri.go:89] found id: ""
	I1216 03:04:43.245445  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.245454  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:43.245460  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:43.245508  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:43.271711  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:43.271730  233647 cri.go:89] found id: ""
	I1216 03:04:43.271736  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:43.271785  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:43.275656  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:43.275720  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:43.304227  233647 cri.go:89] found id: ""
	I1216 03:04:43.304248  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.304257  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:43.304262  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:43.304327  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:43.331002  233647 cri.go:89] found id: ""
	I1216 03:04:43.331029  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.331041  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:43.331052  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:43.331073  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:43.345955  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:43.345984  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:43.402576  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:43.402598  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:43.402612  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:43.432899  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:43.432926  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:43.460428  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:43.460457  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:43.486603  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:43.486625  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:43.543843  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:43.543874  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:43.573462  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:43.573493  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:46.156154  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:46.156579  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:46.156642  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:46.156706  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:46.183671  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:46.183702  233647 cri.go:89] found id: ""
	I1216 03:04:46.183713  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:46.183772  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:46.188151  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:46.188208  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:46.217415  233647 cri.go:89] found id: ""
	I1216 03:04:46.217437  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.217448  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:46.217454  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:46.217511  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:46.244563  233647 cri.go:89] found id: ""
	I1216 03:04:46.244589  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.244596  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:46.244602  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:46.244656  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:46.271475  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:46.271498  233647 cri.go:89] found id: ""
	I1216 03:04:46.271508  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:46.271560  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:46.275440  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:46.275502  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:46.303741  233647 cri.go:89] found id: ""
	I1216 03:04:46.303763  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.303772  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:46.303779  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:46.303858  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:46.332440  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:46.332459  233647 cri.go:89] found id: ""
	I1216 03:04:46.332468  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:46.332524  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:46.336438  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:46.336493  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:46.364557  233647 cri.go:89] found id: ""
	I1216 03:04:46.364585  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.364597  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:46.364605  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:46.364661  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:46.391606  233647 cri.go:89] found id: ""
	I1216 03:04:46.391634  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.391643  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:46.391652  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:46.391662  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:46.448671  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:46.448702  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:46.448719  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:46.479787  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:46.479831  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:46.507184  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:46.507209  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:46.537405  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:46.537432  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:46.608985  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:46.609016  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:46.644455  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:46.644484  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:46.745339  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:46.745369  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Dec 16 03:04:37 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:37.323395553Z" level=info msg="Starting container: 2f6ee6464e76411a3bab9021bf70715bfbb13a30ad8a9875c5e0fb877e7e59e9" id=f7d49d3b-e2d1-4b05-a218-d34780db8c3d name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:04:37 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:37.326098875Z" level=info msg="Started container" PID=2176 containerID=2f6ee6464e76411a3bab9021bf70715bfbb13a30ad8a9875c5e0fb877e7e59e9 description=kube-system/coredns-5dd5756b68-8lk58/coredns id=f7d49d3b-e2d1-4b05-a218-d34780db8c3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=419ec8833b668def381fbb58643d07258b4d46111348746a07c27b69c221e705
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.15950748Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5305baab-dc55-4483-9e8f-c27edb496bcf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.159588632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.165528198Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e055ebd833715e7a8d4fa61e5de8f6a8510412ee977edea3d7d62afb0c4bd61a UID:68715bfa-1969-4519-9966-8409fc51c09f NetNS:/var/run/netns/d6e9e1f0-cc38-4c5b-bf6b-e7f4af607fc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a9e0}] Aliases:map[]}"
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.165567624Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.1766896Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e055ebd833715e7a8d4fa61e5de8f6a8510412ee977edea3d7d62afb0c4bd61a UID:68715bfa-1969-4519-9966-8409fc51c09f NetNS:/var/run/netns/d6e9e1f0-cc38-4c5b-bf6b-e7f4af607fc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a9e0}] Aliases:map[]}"
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.176903696Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.17777104Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.178670118Z" level=info msg="Ran pod sandbox e055ebd833715e7a8d4fa61e5de8f6a8510412ee977edea3d7d62afb0c4bd61a with infra container: default/busybox/POD" id=5305baab-dc55-4483-9e8f-c27edb496bcf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.180006713Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d456af5f-91f5-4555-8081-1ad71bbcfd87 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.18018974Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d456af5f-91f5-4555-8081-1ad71bbcfd87 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.180258616Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d456af5f-91f5-4555-8081-1ad71bbcfd87 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.180811663Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df048c94-2f7c-471e-ab7a-2d94f706c80d name=/runtime.v1.ImageService/PullImage
	Dec 16 03:04:40 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:40.182477717Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.486738158Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=df048c94-2f7c-471e-ab7a-2d94f706c80d name=/runtime.v1.ImageService/PullImage
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.487679825Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e0924432-8b00-41ca-9e82-e275cffb7343 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.489207685Z" level=info msg="Creating container: default/busybox/busybox" id=2bfa2936-8f3b-4132-9400-9129b9646e80 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.489339027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.493526527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.493932821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.522488143Z" level=info msg="Created container 0de2b24cbf9249d98a231c08d11986f05f3762e7e81c2dd2c953f6458684a335: default/busybox/busybox" id=2bfa2936-8f3b-4132-9400-9129b9646e80 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.523065398Z" level=info msg="Starting container: 0de2b24cbf9249d98a231c08d11986f05f3762e7e81c2dd2c953f6458684a335" id=d457b11b-d9b4-4819-93f1-58829c85fe2d name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:04:41 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:41.524740447Z" level=info msg="Started container" PID=2253 containerID=0de2b24cbf9249d98a231c08d11986f05f3762e7e81c2dd2c953f6458684a335 description=default/busybox/busybox id=d457b11b-d9b4-4819-93f1-58829c85fe2d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e055ebd833715e7a8d4fa61e5de8f6a8510412ee977edea3d7d62afb0c4bd61a
	Dec 16 03:04:47 old-k8s-version-073001 crio[782]: time="2025-12-16T03:04:47.96307236Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	0de2b24cbf924       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   e055ebd833715       busybox                                          default
	2f6ee6464e764       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   419ec8833b668       coredns-5dd5756b68-8lk58                         kube-system
	94a56f7cb9506       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   8c8240be9648a       storage-provisioner                              kube-system
	d3d6feb9567d5       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   bb620230a826f       kindnet-8qgxg                                    kube-system
	639b881a8a34f       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   5d44d2db8b85d       kube-proxy-mhxd9                                 kube-system
	875e072b1e127       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   1e0f4ad01bde2       etcd-old-k8s-version-073001                      kube-system
	5f7b23aafeec7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   faf057e0bf230       kube-scheduler-old-k8s-version-073001            kube-system
	bce94adfb51a9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   8ad676cd03ee4       kube-controller-manager-old-k8s-version-073001   kube-system
	73a78987cab13       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   f92d74cdb01d8       kube-apiserver-old-k8s-version-073001            kube-system
	
	
	==> coredns [2f6ee6464e76411a3bab9021bf70715bfbb13a30ad8a9875c5e0fb877e7e59e9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41189 - 37581 "HINFO IN 4954949532891035467.8337446157646273235. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048408803s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-073001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-073001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=old-k8s-version-073001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_04_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:04:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-073001
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:04:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:04:41 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:04:41 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:04:41 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:04:41 +0000   Tue, 16 Dec 2025 03:04:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-073001
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                4d9d8feb-d0ea-4431-92a9-9a047ec2b103
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-8lk58                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-073001                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-8qgxg                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-073001             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-073001    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-mhxd9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-073001             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-073001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-073001 event: Registered Node old-k8s-version-073001 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-073001 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [875e072b1e12753104d5518a8f0b37670ab27d9391a2b37facf099339e326546] <==
	{"level":"info","ts":"2025-12-16T03:04:05.713146Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-16T03:04:05.71568Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-16T03:04:05.715877Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-16T03:04:05.715929Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-16T03:04:05.716202Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-16T03:04:05.716323Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-16T03:04:06.198935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-16T03:04:06.199089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-16T03:04:06.199128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-16T03:04:06.199149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-16T03:04:06.199158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-16T03:04:06.19917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-16T03:04:06.19918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-16T03:04:06.200184Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:04:06.200765Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-073001 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-16T03:04:06.200875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T03:04:06.201067Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T03:04:06.200965Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:04:06.202501Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-16T03:04:06.202521Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-16T03:04:06.202541Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:04:06.202572Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:04:06.20262Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-16T03:04:06.20263Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-16T03:04:21.172642Z","caller":"traceutil/trace.go:171","msg":"trace[128921541] transaction","detail":"{read_only:false; response_revision:273; number_of_response:1; }","duration":"114.921682ms","start":"2025-12-16T03:04:21.057689Z","end":"2025-12-16T03:04:21.17261Z","steps":["trace[128921541] 'process raft request'  (duration: 49.854602ms)","trace[128921541] 'compare'  (duration: 64.872096ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:04:49 up 47 min,  0 user,  load average: 3.16, 2.41, 1.68
	Linux old-k8s-version-073001 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3d6feb9567d5dd7ad1ac4becf73cc7f4551bd20d5389afec9f2c72368e7f057] <==
	I1216 03:04:26.173102       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:04:26.173398       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 03:04:26.173548       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:04:26.173565       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:04:26.173592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:04:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:04:26.376708       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:04:26.376727       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:04:26.376739       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:04:26.472523       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:04:26.776922       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:04:26.776967       1 metrics.go:72] Registering metrics
	I1216 03:04:26.777060       1 controller.go:711] "Syncing nftables rules"
	I1216 03:04:36.377278       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:04:36.377328       1 main.go:301] handling current node
	I1216 03:04:46.379896       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:04:46.379939       1 main.go:301] handling current node
	
	
	==> kube-apiserver [73a78987cab138b7914c34ca7c7b2a11f0f33ba62265a737dbe5ccc2061e9d8c] <==
	I1216 03:04:07.476765       1 shared_informer.go:318] Caches are synced for configmaps
	I1216 03:04:07.476777       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 03:04:07.477182       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1216 03:04:07.477216       1 aggregator.go:166] initial CRD sync complete...
	I1216 03:04:07.477225       1 autoregister_controller.go:141] Starting autoregister controller
	I1216 03:04:07.477232       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:04:07.477239       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:04:07.477884       1 controller.go:624] quota admission added evaluator for: namespaces
	E1216 03:04:07.480475       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1216 03:04:07.683420       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:04:08.383833       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 03:04:08.387615       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 03:04:08.387628       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:04:08.795196       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:04:08.833382       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:04:08.882315       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 03:04:08.888385       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1216 03:04:08.889781       1 controller.go:624] quota admission added evaluator for: endpoints
	I1216 03:04:08.895629       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:04:09.439506       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1216 03:04:10.486341       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1216 03:04:10.497898       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 03:04:10.507002       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1216 03:04:24.063550       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1216 03:04:24.213758       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bce94adfb51a9f2bccda3e8d0149c46afca53978dfe3edaa0359ee41ba4ceafe] <==
	I1216 03:04:23.711147       1 event.go:307] "Event occurred" object="old-k8s-version-073001" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-073001 event: Registered Node old-k8s-version-073001 in Controller"
	I1216 03:04:23.718864       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-073001" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1216 03:04:24.038673       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 03:04:24.073338       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8qgxg"
	I1216 03:04:24.075680       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mhxd9"
	I1216 03:04:24.109911       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 03:04:24.110406       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1216 03:04:24.217890       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1216 03:04:24.519143       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-x4gcq"
	I1216 03:04:24.525878       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8lk58"
	I1216 03:04:24.533959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="316.252041ms"
	I1216 03:04:24.542035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.99947ms"
	I1216 03:04:24.542166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.765µs"
	I1216 03:04:24.544792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.183µs"
	I1216 03:04:25.125116       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1216 03:04:25.144392       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-x4gcq"
	I1216 03:04:25.172224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.919283ms"
	I1216 03:04:25.180126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.850784ms"
	I1216 03:04:25.180264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.124µs"
	I1216 03:04:36.969015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="156.303µs"
	I1216 03:04:36.987860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="175.17µs"
	I1216 03:04:37.647973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.23µs"
	I1216 03:04:37.677203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.340597ms"
	I1216 03:04:37.677303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.645µs"
	I1216 03:04:38.713082       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [639b881a8a34fa1d26ba775deb63de4803854ba06d29f4dac3567751800c55b8] <==
	I1216 03:04:24.489014       1 server_others.go:69] "Using iptables proxy"
	I1216 03:04:24.498646       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1216 03:04:24.520428       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:04:24.523683       1 server_others.go:152] "Using iptables Proxier"
	I1216 03:04:24.523723       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1216 03:04:24.523733       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1216 03:04:24.523777       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1216 03:04:24.524105       1 server.go:846] "Version info" version="v1.28.0"
	I1216 03:04:24.524165       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:04:24.526342       1 config.go:188] "Starting service config controller"
	I1216 03:04:24.526377       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1216 03:04:24.526405       1 config.go:97] "Starting endpoint slice config controller"
	I1216 03:04:24.526413       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1216 03:04:24.526537       1 config.go:315] "Starting node config controller"
	I1216 03:04:24.526592       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1216 03:04:24.626905       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1216 03:04:24.626955       1 shared_informer.go:318] Caches are synced for service config
	I1216 03:04:24.626976       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5f7b23aafeec73f05345ca1384cdc0da67a41252ec579c9ba0d6f33a363b14bc] <==
	W1216 03:04:07.451811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1216 03:04:07.451853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 03:04:07.451861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 03:04:07.451869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1216 03:04:07.451902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 03:04:07.451933       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1216 03:04:07.451965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 03:04:07.451985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1216 03:04:07.452654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 03:04:07.452686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1216 03:04:08.323641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 03:04:08.323682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1216 03:04:08.350307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 03:04:08.350333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1216 03:04:08.361065       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 03:04:08.361093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1216 03:04:08.479394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 03:04:08.479425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1216 03:04:08.527949       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 03:04:08.527977       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:04:08.639931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 03:04:08.639967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1216 03:04:08.642283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 03:04:08.642313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1216 03:04:11.648165       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 03:04:23 old-k8s-version-073001 kubelet[1398]: I1216 03:04:23.541601    1398 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.079872    1398 topology_manager.go:215] "Topology Admit Handler" podUID="ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08" podNamespace="kube-system" podName="kindnet-8qgxg"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.081685    1398 topology_manager.go:215] "Topology Admit Handler" podUID="427da05c-6160-4d42-ae08-2c49bb47dcb1" podNamespace="kube-system" podName="kube-proxy-mhxd9"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132320    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt9t2\" (UniqueName: \"kubernetes.io/projected/ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08-kube-api-access-xt9t2\") pod \"kindnet-8qgxg\" (UID: \"ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08\") " pod="kube-system/kindnet-8qgxg"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132376    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/427da05c-6160-4d42-ae08-2c49bb47dcb1-xtables-lock\") pod \"kube-proxy-mhxd9\" (UID: \"427da05c-6160-4d42-ae08-2c49bb47dcb1\") " pod="kube-system/kube-proxy-mhxd9"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132408    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6cb\" (UniqueName: \"kubernetes.io/projected/427da05c-6160-4d42-ae08-2c49bb47dcb1-kube-api-access-7h6cb\") pod \"kube-proxy-mhxd9\" (UID: \"427da05c-6160-4d42-ae08-2c49bb47dcb1\") " pod="kube-system/kube-proxy-mhxd9"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132438    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08-xtables-lock\") pod \"kindnet-8qgxg\" (UID: \"ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08\") " pod="kube-system/kindnet-8qgxg"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132540    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08-cni-cfg\") pod \"kindnet-8qgxg\" (UID: \"ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08\") " pod="kube-system/kindnet-8qgxg"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132595    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08-lib-modules\") pod \"kindnet-8qgxg\" (UID: \"ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08\") " pod="kube-system/kindnet-8qgxg"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132624    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/427da05c-6160-4d42-ae08-2c49bb47dcb1-kube-proxy\") pod \"kube-proxy-mhxd9\" (UID: \"427da05c-6160-4d42-ae08-2c49bb47dcb1\") " pod="kube-system/kube-proxy-mhxd9"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.132668    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/427da05c-6160-4d42-ae08-2c49bb47dcb1-lib-modules\") pod \"kube-proxy-mhxd9\" (UID: \"427da05c-6160-4d42-ae08-2c49bb47dcb1\") " pod="kube-system/kube-proxy-mhxd9"
	Dec 16 03:04:24 old-k8s-version-073001 kubelet[1398]: I1216 03:04:24.613809    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mhxd9" podStartSLOduration=0.613760331 podCreationTimestamp="2025-12-16 03:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:04:24.613578499 +0000 UTC m=+14.150012982" watchObservedRunningTime="2025-12-16 03:04:24.613760331 +0000 UTC m=+14.150194793"
	Dec 16 03:04:26 old-k8s-version-073001 kubelet[1398]: I1216 03:04:26.621459    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8qgxg" podStartSLOduration=1.114331871 podCreationTimestamp="2025-12-16 03:04:24 +0000 UTC" firstStartedPulling="2025-12-16 03:04:24.390363813 +0000 UTC m=+13.926798262" lastFinishedPulling="2025-12-16 03:04:25.897437845 +0000 UTC m=+15.433872297" observedRunningTime="2025-12-16 03:04:26.621335892 +0000 UTC m=+16.157770375" watchObservedRunningTime="2025-12-16 03:04:26.621405906 +0000 UTC m=+16.157840366"
	Dec 16 03:04:36 old-k8s-version-073001 kubelet[1398]: I1216 03:04:36.945186    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 16 03:04:36 old-k8s-version-073001 kubelet[1398]: I1216 03:04:36.966860    1398 topology_manager.go:215] "Topology Admit Handler" podUID="9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786" podNamespace="kube-system" podName="storage-provisioner"
	Dec 16 03:04:36 old-k8s-version-073001 kubelet[1398]: I1216 03:04:36.969114    1398 topology_manager.go:215] "Topology Admit Handler" podUID="d193df22-756a-429b-b218-48251e837115" podNamespace="kube-system" podName="coredns-5dd5756b68-8lk58"
	Dec 16 03:04:37 old-k8s-version-073001 kubelet[1398]: I1216 03:04:37.022041    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786-tmp\") pod \"storage-provisioner\" (UID: \"9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786\") " pod="kube-system/storage-provisioner"
	Dec 16 03:04:37 old-k8s-version-073001 kubelet[1398]: I1216 03:04:37.022089    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whh7m\" (UniqueName: \"kubernetes.io/projected/d193df22-756a-429b-b218-48251e837115-kube-api-access-whh7m\") pod \"coredns-5dd5756b68-8lk58\" (UID: \"d193df22-756a-429b-b218-48251e837115\") " pod="kube-system/coredns-5dd5756b68-8lk58"
	Dec 16 03:04:37 old-k8s-version-073001 kubelet[1398]: I1216 03:04:37.022120    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbm56\" (UniqueName: \"kubernetes.io/projected/9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786-kube-api-access-lbm56\") pod \"storage-provisioner\" (UID: \"9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786\") " pod="kube-system/storage-provisioner"
	Dec 16 03:04:37 old-k8s-version-073001 kubelet[1398]: I1216 03:04:37.022204    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d193df22-756a-429b-b218-48251e837115-config-volume\") pod \"coredns-5dd5756b68-8lk58\" (UID: \"d193df22-756a-429b-b218-48251e837115\") " pod="kube-system/coredns-5dd5756b68-8lk58"
	Dec 16 03:04:37 old-k8s-version-073001 kubelet[1398]: I1216 03:04:37.647751    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8lk58" podStartSLOduration=13.647701724000001 podCreationTimestamp="2025-12-16 03:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:04:37.647640191 +0000 UTC m=+27.184074659" watchObservedRunningTime="2025-12-16 03:04:37.647701724 +0000 UTC m=+27.184136184"
	Dec 16 03:04:37 old-k8s-version-073001 kubelet[1398]: I1216 03:04:37.656271    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.656220516 podCreationTimestamp="2025-12-16 03:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:04:37.655999389 +0000 UTC m=+27.192433850" watchObservedRunningTime="2025-12-16 03:04:37.656220516 +0000 UTC m=+27.192654988"
	Dec 16 03:04:39 old-k8s-version-073001 kubelet[1398]: I1216 03:04:39.856594    1398 topology_manager.go:215] "Topology Admit Handler" podUID="68715bfa-1969-4519-9966-8409fc51c09f" podNamespace="default" podName="busybox"
	Dec 16 03:04:39 old-k8s-version-073001 kubelet[1398]: I1216 03:04:39.941431    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlncg\" (UniqueName: \"kubernetes.io/projected/68715bfa-1969-4519-9966-8409fc51c09f-kube-api-access-mlncg\") pod \"busybox\" (UID: \"68715bfa-1969-4519-9966-8409fc51c09f\") " pod="default/busybox"
	Dec 16 03:04:41 old-k8s-version-073001 kubelet[1398]: I1216 03:04:41.657158    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.350473906 podCreationTimestamp="2025-12-16 03:04:39 +0000 UTC" firstStartedPulling="2025-12-16 03:04:40.180448477 +0000 UTC m=+29.716882920" lastFinishedPulling="2025-12-16 03:04:41.487088593 +0000 UTC m=+31.023523043" observedRunningTime="2025-12-16 03:04:41.656901341 +0000 UTC m=+31.193335803" watchObservedRunningTime="2025-12-16 03:04:41.657114029 +0000 UTC m=+31.193548488"
	
	
	==> storage-provisioner [94a56f7cb950632d2cbb008b049e62b4fcef39c8e49f75daa65e40d0c1a7e666] <==
	I1216 03:04:37.335300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:04:37.343663       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:04:37.343714       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 03:04:37.352290       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:04:37.352346       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55598adf-c5c2-4b9b-a5f6-64fff021d0ce", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-073001_593d5001-77cc-48f9-aeff-8ce7b21e72cb became leader
	I1216 03:04:37.352479       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-073001_593d5001-77cc-48f9-aeff-8ce7b21e72cb!
	I1216 03:04:37.453575       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-073001_593d5001-77cc-48f9-aeff-8ce7b21e72cb!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073001 -n old-k8s-version-073001
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-073001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.65s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (321.245602ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:04:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
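The exit status 11 above comes from minikube's paused check rather than from the addon itself: per the stderr block, `addons enable` first runs `sudo runc list -f json` on the node, and that command exits 1 because /run/runc does not exist there. A minimal sketch of reproducing just that check by hand (profile name taken from this log; passing the trailing command through `minikube ssh --` is the only assumption):

	# run the same container listing the paused check uses, inside the no-preload node
	minikube ssh -p no-preload-307185 -- sudo runc list -f json
	# on this node it should fail with: open /run/runc: no such file or directory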
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-307185 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-307185 describe deploy/metrics-server -n kube-system: exit status 1 (79.240842ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-307185 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
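For context, the expected string is just the two flags from the enable command combined: --registries=MetricsServer=fake.domain prefixes the --images=MetricsServer=registry.k8s.io/echoserver:1.4 override, giving fake.domain/registry.k8s.io/echoserver:1.4. A hedged one-liner that would read the deployed image directly once the addon exists (context name taken from this log):

	kubectl --context no-preload-307185 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# would print fake.domain/registry.k8s.io/echoserver:1.4; here it returns NotFound because enable never succeeded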
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307185
helpers_test.go:244: (dbg) docker inspect no-preload-307185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db",
	        "Created": "2025-12-16T03:03:57.812441327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266720,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:03:57.854468804Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/hostname",
	        "HostsPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/hosts",
	        "LogPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db-json.log",
	        "Name": "/no-preload-307185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-307185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db",
	                "LowerDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307185",
	                "Source": "/var/lib/docker/volumes/no-preload-307185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307185",
	                "name.minikube.sigs.k8s.io": "no-preload-307185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "36d83763c06377bce26c7b8430ed53199f8ddd55c15e08a7338942d5dba3fcaf",
	            "SandboxKey": "/var/run/docker/netns/36d83763c063",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-307185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90167d09366ac94fe3d8c3c2c088a58bdbd0aa8f97facfeb6de0aac99571708a",
	                    "EndpointID": "4228a6d21d7db6101d007d344adf1809dcb96a9757616af63b242786008d5b15",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1e:81:42:b0:8b:75",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307185",
	                        "995416161edc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-307185 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-307185 logs -n 25: (1.199893894s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-646016 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo docker system info                                                                                                                                                                                                      │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo containerd config dump                                                                                                                                                                                                  │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo crio config                                                                                                                                                                                                             │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p cilium-646016                                                                                                                                                                                                                              │ cilium-646016          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-073001 │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-027639 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-027639    │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-027639                                                                                                                                                                                                                        │ NoKubernetes-027639    │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-307185      │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-073001 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ stop    │ -p old-k8s-version-073001 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-073001 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-307185      │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ delete  │ -p running-upgrade-146373                                                                                                                                                                                                                     │ running-upgrade-146373 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:03:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:03:56.983492  266278 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:03:56.983587  266278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:03:56.983599  266278 out.go:374] Setting ErrFile to fd 2...
	I1216 03:03:56.983606  266278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:03:56.983800  266278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:03:56.984344  266278 out.go:368] Setting JSON to false
	I1216 03:03:56.985440  266278 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2789,"bootTime":1765851448,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:03:56.985498  266278 start.go:143] virtualization: kvm guest
	I1216 03:03:56.987509  266278 out.go:179] * [no-preload-307185] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:03:56.989006  266278 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:03:56.989008  266278 notify.go:221] Checking for updates...
	I1216 03:03:56.991516  266278 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:03:56.992646  266278 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:03:56.993773  266278 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:03:56.994992  266278 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:03:56.996003  266278 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:03:56.997638  266278 config.go:182] Loaded profile config "kubernetes-upgrade-058433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:03:56.997737  266278 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:03:56.997803  266278 config.go:182] Loaded profile config "running-upgrade-146373": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 03:03:56.997957  266278 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:03:57.022553  266278 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:03:57.022679  266278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:03:57.077939  266278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:03:57.067316279 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:03:57.078050  266278 docker.go:319] overlay module found
	I1216 03:03:57.079834  266278 out.go:179] * Using the docker driver based on user configuration
	I1216 03:03:57.081152  266278 start.go:309] selected driver: docker
	I1216 03:03:57.081167  266278 start.go:927] validating driver "docker" against <nil>
	I1216 03:03:57.081178  266278 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:03:57.081715  266278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:03:57.138179  266278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:03:57.128436 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:03:57.138343  266278 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:03:57.138544  266278 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:03:57.140263  266278 out.go:179] * Using Docker driver with root privileges
	I1216 03:03:57.141488  266278 cni.go:84] Creating CNI manager for ""
	I1216 03:03:57.141558  266278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:03:57.141568  266278 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:03:57.141625  266278 start.go:353] cluster config:
	{Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:03:57.142997  266278 out.go:179] * Starting "no-preload-307185" primary control-plane node in "no-preload-307185" cluster
	I1216 03:03:57.144118  266278 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:03:57.145253  266278 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:03:57.146353  266278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:03:57.146455  266278 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:03:57.146467  266278 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/config.json ...
	I1216 03:03:57.146501  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/config.json: {Name:mk19c39507f62b1421041e099e0fa2ad8af7d345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:03:57.146643  266278 cache.go:107] acquiring lock: {Name:mk9c043df005d5db5fe4723c7121f40ea0f1812e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146679  266278 cache.go:107] acquiring lock: {Name:mkdf57b3d7d678135b23a9c051c86f85f24445d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146706  266278 cache.go:107] acquiring lock: {Name:mkdd9488923482e72919ad32bb6f5b3b308df98d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146709  266278 cache.go:107] acquiring lock: {Name:mk85875299d4b06a340bacb43fc637fd3eac0534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146765  266278 cache.go:107] acquiring lock: {Name:mke4bbadab765c4e0f220f70570523f5ea9b2203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146810  266278 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:03:57.146837  266278 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:03:57.146845  266278 cache.go:107] acquiring lock: {Name:mk4b159c6dc596e5ca3ffca7550c82c8dbbfcee8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146874  266278 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:03:57.146894  266278 cache.go:115] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1216 03:03:57.146908  266278 cache.go:115] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1216 03:03:57.146912  266278 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 70.033µs
	I1216 03:03:57.146800  266278 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:03:57.146928  266278 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1216 03:03:57.146920  266278 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 246.615µs
	I1216 03:03:57.146939  266278 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1216 03:03:57.146938  266278 cache.go:107] acquiring lock: {Name:mk515b27b0b3a5786bafab82ddd54f4df9a8b6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.146648  266278 cache.go:107] acquiring lock: {Name:mkb5a2a6366f972707bdae2fa0fdae7fc7a4a37e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.147107  266278 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:03:57.147118  266278 cache.go:115] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 03:03:57.147129  266278 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 505.432µs
	I1216 03:03:57.147138  266278 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 03:03:57.148062  266278 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:03:57.148060  266278 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:03:57.148061  266278 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:03:57.148061  266278 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:03:57.148062  266278 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:03:57.169114  266278 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:03:57.169139  266278 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:03:57.169160  266278 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:03:57.169194  266278 start.go:360] acquireMachinesLock for no-preload-307185: {Name:mk94feb63e5fbefef1b2772890835ef937ceebef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:03:57.169298  266278 start.go:364] duration metric: took 84.161µs to acquireMachinesLock for "no-preload-307185"
	I1216 03:03:57.169330  266278 start.go:93] Provisioning new machine with config: &{Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:03:57.169436  266278 start.go:125] createHost starting for "" (driver="docker")
	W1216 03:03:52.938249  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:52.941147  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:03:52.941162  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:53.015905  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:03:53.015939  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:53.058020  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:03:53.058049  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:53.093123  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:03:53.093146  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:55.631896  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:03:55.632410  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:03:55.632467  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:55.632536  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:55.675087  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:55.675108  224341 cri.go:89] found id: ""
	I1216 03:03:55.675116  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:03:55.675168  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.679048  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:55.679114  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:55.718858  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:55.718881  224341 cri.go:89] found id: ""
	I1216 03:03:55.718891  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:03:55.718957  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.723103  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:55.723161  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:55.760013  224341 cri.go:89] found id: ""
	I1216 03:03:55.760038  224341 logs.go:282] 0 containers: []
	W1216 03:03:55.760049  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:03:55.760056  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:55.760111  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:55.801849  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:55.801873  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:55.801879  224341 cri.go:89] found id: ""
	I1216 03:03:55.801888  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:03:55.801945  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.805756  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.809415  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:55.809473  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:55.845426  224341 cri.go:89] found id: ""
	I1216 03:03:55.845452  224341 logs.go:282] 0 containers: []
	W1216 03:03:55.845464  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:55.845472  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:55.845527  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:55.882578  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:55.882604  224341 cri.go:89] found id: ""
	I1216 03:03:55.882613  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:03:55.882676  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.886726  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:55.886786  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:55.926694  224341 cri.go:89] found id: ""
	I1216 03:03:55.926716  224341 logs.go:282] 0 containers: []
	W1216 03:03:55.926724  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:55.926732  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:55.926786  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:55.963541  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:55.963566  224341 cri.go:89] found id: ""
	I1216 03:03:55.963577  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:03:55.963635  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.967619  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:55.967640  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:56.083090  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:03:56.083123  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:56.132143  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:03:56.132173  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:56.204730  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:03:56.204758  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:56.246643  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:03:56.246673  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:56.279900  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:03:56.279925  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:56.316599  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:56.316630  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:56.332057  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:56.332082  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:56.389756  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:56.389775  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:03:56.389787  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:56.426453  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:03:56.426479  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:56.459761  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:56.459784  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:55.388959  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:03:55.389383  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:03:55.389435  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:55.389482  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:55.420377  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:55.420409  233647 cri.go:89] found id: ""
	I1216 03:03:55.420416  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:03:55.420470  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.424737  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:55.424811  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:55.454011  233647 cri.go:89] found id: ""
	I1216 03:03:55.454034  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.454044  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:03:55.454050  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:55.454102  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:55.484241  233647 cri.go:89] found id: ""
	I1216 03:03:55.484281  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.484293  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:03:55.484301  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:55.484366  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:55.516302  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:55.516334  233647 cri.go:89] found id: ""
	I1216 03:03:55.516346  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:03:55.516404  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.520637  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:55.520701  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:55.551334  233647 cri.go:89] found id: ""
	I1216 03:03:55.551363  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.551375  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:55.551388  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:55.551443  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:55.582038  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:03:55.582057  233647 cri.go:89] found id: ""
	I1216 03:03:55.582064  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:03:55.582106  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:55.586264  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:55.586335  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:55.617074  233647 cri.go:89] found id: ""
	I1216 03:03:55.617099  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.617107  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:55.617113  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:55.617194  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:55.651370  233647 cri.go:89] found id: ""
	I1216 03:03:55.651398  233647 logs.go:282] 0 containers: []
	W1216 03:03:55.651409  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:03:55.651421  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:55.651437  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:55.725358  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:03:55.725390  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:55.759551  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:55.759591  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:55.852078  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:55.852114  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:55.866931  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:55.866958  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:55.928907  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:55.928929  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:03:55.928944  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:55.962427  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:03:55.962456  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:55.992620  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:03:55.992649  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:03:54.623181  263091 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-073001:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (5.020901186s)
	I1216 03:03:54.623215  263091 kic.go:203] duration metric: took 5.021054298s to extract preloaded images to volume ...
	W1216 03:03:54.623327  263091 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:03:54.623370  263091 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:03:54.623421  263091 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:03:54.681873  263091 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-073001 --name old-k8s-version-073001 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-073001 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-073001 --network old-k8s-version-073001 --ip 192.168.103.2 --volume old-k8s-version-073001:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:03:54.964409  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Running}}
	I1216 03:03:54.987030  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:03:55.008874  263091 cli_runner.go:164] Run: docker exec old-k8s-version-073001 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:03:55.055890  263091 oci.go:144] the created container "old-k8s-version-073001" has a running status.
	I1216 03:03:55.055922  263091 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa...
	I1216 03:03:55.128834  263091 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:03:55.155140  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:03:55.177927  263091 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:03:55.177952  263091 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-073001 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:03:55.229398  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:03:55.254475  263091 machine.go:94] provisionDockerMachine start ...
	I1216 03:03:55.254607  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:55.283793  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:55.284342  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:55.284386  263091 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:03:55.285961  263091 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38330->127.0.0.1:33058: read: connection reset by peer
	I1216 03:03:58.456343  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-073001
	
	I1216 03:03:58.456407  263091 ubuntu.go:182] provisioning hostname "old-k8s-version-073001"
	I1216 03:03:58.456479  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:58.484352  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:58.484793  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:58.484905  263091 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-073001 && echo "old-k8s-version-073001" | sudo tee /etc/hostname
	I1216 03:03:58.650136  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-073001
	
	I1216 03:03:58.650230  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:58.671013  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:58.671334  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:58.671367  263091 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-073001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-073001/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-073001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:03:58.815635  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:03:58.815663  263091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:03:58.815707  263091 ubuntu.go:190] setting up certificates
	I1216 03:03:58.815720  263091 provision.go:84] configureAuth start
	I1216 03:03:58.815795  263091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-073001
	I1216 03:03:58.836594  263091 provision.go:143] copyHostCerts
	I1216 03:03:58.836655  263091 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:03:58.836668  263091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:03:58.836748  263091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:03:58.836866  263091 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:03:58.836877  263091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:03:58.836912  263091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:03:58.836990  263091 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:03:58.836999  263091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:03:58.837032  263091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:03:58.837089  263091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-073001 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-073001]
	I1216 03:03:59.007674  263091 provision.go:177] copyRemoteCerts
	I1216 03:03:59.007734  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:03:59.007768  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.027988  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.129477  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:03:59.152712  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 03:03:59.172888  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:03:59.192518  263091 provision.go:87] duration metric: took 376.770342ms to configureAuth
	I1216 03:03:59.192548  263091 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:03:59.192725  263091 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:03:59.192814  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.212927  263091 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:59.213250  263091 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1216 03:03:59.213271  263091 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:03:59.500792  263091 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:03:59.500840  263091 machine.go:97] duration metric: took 4.246308922s to provisionDockerMachine
	I1216 03:03:59.500854  263091 client.go:176] duration metric: took 10.476226918s to LocalClient.Create
	I1216 03:03:59.500871  263091 start.go:167] duration metric: took 10.476282253s to libmachine.API.Create "old-k8s-version-073001"
	I1216 03:03:59.500880  263091 start.go:293] postStartSetup for "old-k8s-version-073001" (driver="docker")
	I1216 03:03:59.500893  263091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:03:59.500987  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:03:59.501036  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.520589  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.622986  263091 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:03:59.626690  263091 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:03:59.626728  263091 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:03:59.626741  263091 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:03:59.626796  263091 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:03:59.626958  263091 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:03:59.627089  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:03:59.635782  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:03:59.656175  263091 start.go:296] duration metric: took 155.280035ms for postStartSetup
	I1216 03:03:59.656535  263091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-073001
	I1216 03:03:59.674339  263091 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/config.json ...
	I1216 03:03:59.674668  263091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:03:59.674735  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.693605  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.791302  263091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:03:59.795977  263091 start.go:128] duration metric: took 10.774398328s to createHost
	I1216 03:03:59.796001  263091 start.go:83] releasing machines lock for "old-k8s-version-073001", held for 10.774640668s
	I1216 03:03:59.796084  263091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-073001
	I1216 03:03:59.814619  263091 ssh_runner.go:195] Run: cat /version.json
	I1216 03:03:59.814644  263091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:03:59.814665  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.814733  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:03:59.834525  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.835504  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:03:59.981381  263091 ssh_runner.go:195] Run: systemctl --version
	I1216 03:03:59.987796  263091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:04:00.021679  263091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:04:00.026871  263091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:04:00.026942  263091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:04:00.053045  263091 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:04:00.053078  263091 start.go:496] detecting cgroup driver to use...
	I1216 03:04:00.053113  263091 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:04:00.053172  263091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:04:00.068945  263091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:04:00.080545  263091 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:04:00.080600  263091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:04:00.096421  263091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:04:00.113104  263091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:04:00.196023  263091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:04:00.281153  263091 docker.go:234] disabling docker service ...
	I1216 03:04:00.281211  263091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:04:00.300014  263091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:04:00.313878  263091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:04:00.397412  263091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:04:00.481876  263091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:04:00.494121  263091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:04:00.508320  263091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1216 03:04:00.508377  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.518381  263091 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:04:00.518454  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.527521  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.536040  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.544510  263091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:04:00.552319  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.560698  263091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.573795  263091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:00.581942  263091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:04:00.590002  263091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:04:00.597208  263091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:00.677744  263091 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:04:00.889093  263091 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:04:00.889166  263091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:04:00.893069  263091 start.go:564] Will wait 60s for crictl version
	I1216 03:04:00.893115  263091 ssh_runner.go:195] Run: which crictl
	I1216 03:04:00.896568  263091 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:04:00.920645  263091 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:04:00.920708  263091 ssh_runner.go:195] Run: crio --version
	I1216 03:04:00.947453  263091 ssh_runner.go:195] Run: crio --version
	I1216 03:04:00.976522  263091 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1216 03:03:57.171991  266278 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:03:57.172231  266278 start.go:159] libmachine.API.Create for "no-preload-307185" (driver="docker")
	I1216 03:03:57.172278  266278 client.go:173] LocalClient.Create starting
	I1216 03:03:57.172336  266278 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:03:57.172375  266278 main.go:143] libmachine: Decoding PEM data...
	I1216 03:03:57.172407  266278 main.go:143] libmachine: Parsing certificate...
	I1216 03:03:57.172475  266278 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:03:57.172503  266278 main.go:143] libmachine: Decoding PEM data...
	I1216 03:03:57.172519  266278 main.go:143] libmachine: Parsing certificate...
	I1216 03:03:57.172867  266278 cli_runner.go:164] Run: docker network inspect no-preload-307185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:03:57.191305  266278 cli_runner.go:211] docker network inspect no-preload-307185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:03:57.191369  266278 network_create.go:284] running [docker network inspect no-preload-307185] to gather additional debugging logs...
	I1216 03:03:57.191392  266278 cli_runner.go:164] Run: docker network inspect no-preload-307185
	W1216 03:03:57.208540  266278 cli_runner.go:211] docker network inspect no-preload-307185 returned with exit code 1
	I1216 03:03:57.208570  266278 network_create.go:287] error running [docker network inspect no-preload-307185]: docker network inspect no-preload-307185: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-307185 not found
	I1216 03:03:57.208580  266278 network_create.go:289] output of [docker network inspect no-preload-307185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-307185 not found
	
	** /stderr **
	I1216 03:03:57.208657  266278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:03:57.227426  266278 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:03:57.228255  266278 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:03:57.230458  266278 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:03:57.231065  266278 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86d7bad883e2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:16:93:66:19:b2} reservation:<nil>}
	I1216 03:03:57.231526  266278 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9bbdfab3d6d3 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d6:5a:a2:42:00:d9} reservation:<nil>}
	I1216 03:03:57.232342  266278 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00240e720}
	I1216 03:03:57.232364  266278 network_create.go:124] attempt to create docker network no-preload-307185 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 03:03:57.232416  266278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-307185 no-preload-307185
	I1216 03:03:57.280720  266278 network_create.go:108] docker network no-preload-307185 192.168.94.0/24 created
	I1216 03:03:57.280753  266278 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-307185" container
	I1216 03:03:57.280836  266278 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:03:57.298554  266278 cli_runner.go:164] Run: docker volume create no-preload-307185 --label name.minikube.sigs.k8s.io=no-preload-307185 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:03:57.300921  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1216 03:03:57.310882  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1216 03:03:57.315262  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1216 03:03:57.317224  266278 oci.go:103] Successfully created a docker volume no-preload-307185
	I1216 03:03:57.317284  266278 cli_runner.go:164] Run: docker run --rm --name no-preload-307185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307185 --entrypoint /usr/bin/test -v no-preload-307185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:03:57.318500  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1216 03:03:57.324309  266278 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1216 03:03:57.739389  266278 oci.go:107] Successfully prepared a docker volume no-preload-307185
	I1216 03:03:57.739478  266278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1216 03:03:57.739560  266278 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:03:57.739598  266278 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:03:57.739639  266278 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:03:57.796477  266278 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-307185 --name no-preload-307185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-307185 --network no-preload-307185 --ip 192.168.94.2 --volume no-preload-307185:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:03:57.846529  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1216 03:03:57.846561  266278 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 699.930372ms
	I1216 03:03:57.846577  266278 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1216 03:03:58.070647  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Running}}
	I1216 03:03:58.089307  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:03:58.108349  266278 cli_runner.go:164] Run: docker exec no-preload-307185 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:03:58.153356  266278 oci.go:144] the created container "no-preload-307185" has a running status.
	I1216 03:03:58.153383  266278 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa...
	I1216 03:03:58.196279  266278 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:03:58.228249  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:03:58.247407  266278 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:03:58.247425  266278 kic_runner.go:114] Args: [docker exec --privileged no-preload-307185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:03:58.288115  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:03:58.312221  266278 machine.go:94] provisionDockerMachine start ...
	I1216 03:03:58.312319  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:03:58.333144  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:03:58.333595  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:03:58.333613  266278 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:03:58.334580  266278 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35744->127.0.0.1:33063: read: connection reset by peer
	I1216 03:03:58.477476  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1216 03:03:58.477521  266278 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.330658915s
	I1216 03:03:58.477545  266278 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 03:03:58.580964  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1216 03:03:58.581001  266278 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.434304663s
	I1216 03:03:58.581020  266278 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 03:03:58.607058  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1216 03:03:58.607089  266278 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.460409486s
	I1216 03:03:58.607104  266278 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1216 03:03:58.613940  266278 cache.go:157] /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1216 03:03:58.613971  266278 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.467270523s
	I1216 03:03:58.613984  266278 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 03:03:58.613998  266278 cache.go:87] Successfully saved all images to host disk.
	I1216 03:04:01.474151  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307185
	
	I1216 03:04:01.474182  266278 ubuntu.go:182] provisioning hostname "no-preload-307185"
	I1216 03:04:01.474247  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:01.493078  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:04:01.493279  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:04:01.493291  266278 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307185 && echo "no-preload-307185" | sudo tee /etc/hostname
	I1216 03:04:01.638433  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307185
	
	I1216 03:04:01.638534  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:01.657194  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:04:01.657441  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:04:01.657466  266278 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:04:01.798237  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:04:01.798276  266278 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:04:01.798306  266278 ubuntu.go:190] setting up certificates
	I1216 03:04:01.798325  266278 provision.go:84] configureAuth start
	I1216 03:04:01.798383  266278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307185
	I1216 03:04:01.819725  266278 provision.go:143] copyHostCerts
	I1216 03:04:01.819800  266278 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:04:01.819831  266278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:04:01.819926  266278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:04:01.820050  266278 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:04:01.820061  266278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:04:01.820092  266278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:04:01.820173  266278 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:04:01.820184  266278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:04:01.820222  266278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:04:01.820293  266278 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.no-preload-307185 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-307185]
	I1216 03:04:01.860212  266278 provision.go:177] copyRemoteCerts
	I1216 03:04:01.860275  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:04:01.860325  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:01.882953  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:00.977648  263091 cli_runner.go:164] Run: docker network inspect old-k8s-version-073001 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:04:00.995145  263091 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 03:04:00.999654  263091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:04:01.010173  263091 kubeadm.go:884] updating cluster {Name:old-k8s-version-073001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-073001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:04:01.010302  263091 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 03:04:01.010342  263091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:04:01.039297  263091 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:04:01.039315  263091 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:04:01.039356  263091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:04:01.064295  263091 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:04:01.064318  263091 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:04:01.064325  263091 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1216 03:04:01.064420  263091 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-073001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-073001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:04:01.064521  263091 ssh_runner.go:195] Run: crio config
	I1216 03:04:01.111617  263091 cni.go:84] Creating CNI manager for ""
	I1216 03:04:01.111639  263091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:01.111658  263091 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:04:01.111677  263091 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-073001 NodeName:old-k8s-version-073001 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:04:01.111801  263091 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-073001"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:04:01.111882  263091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1216 03:04:01.120315  263091 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:04:01.120386  263091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:04:01.128158  263091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1216 03:04:01.140872  263091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:04:01.156027  263091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
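
The 2162-byte payload written above is the kubeadm config shown a few lines earlier, staged as /var/tmp/minikube/kubeadm.yaml.new. Once it is on the node it can be sanity-checked by hand before the real kubeadm init runs; a minimal sketch (not part of the test itself, and assuming the v1.28.0 kubeadm binary the test already found):

    # inspect the rendered config on the node
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # render what kubeadm would do without changing the node
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
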
	I1216 03:04:01.169049  263091 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:04:01.172565  263091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
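
The /etc/hosts edit above is intentionally idempotent: any existing control-plane.minikube.internal line is filtered out, the fresh mapping is appended, and the temp file is copied back with sudo. A generalized sketch of the same pattern (the add_host helper name is illustrative, not from the log):

    # illustrative helper mirroring the pattern in the log
    add_host() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
      sudo cp /tmp/hosts.$$ /etc/hosts
    }
    add_host 192.168.103.2 control-plane.minikube.internal
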
	I1216 03:04:01.182276  263091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:01.257167  263091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:01.280128  263091 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001 for IP: 192.168.103.2
	I1216 03:04:01.280147  263091 certs.go:195] generating shared ca certs ...
	I1216 03:04:01.280161  263091 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.280326  263091 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:04:01.280379  263091 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:04:01.280393  263091 certs.go:257] generating profile certs ...
	I1216 03:04:01.280451  263091 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.key
	I1216 03:04:01.280479  263091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt with IP's: []
	I1216 03:04:01.425484  263091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt ...
	I1216 03:04:01.425510  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: {Name:mkf3a97c40568c5da3dda20123f4fc0fbbbff9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.425672  263091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.key ...
	I1216 03:04:01.425689  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.key: {Name:mk95a16ee8f617246fdcb4f60fa48de82ac6ac5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.425769  263091 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e
	I1216 03:04:01.425787  263091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 03:04:01.512209  263091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e ...
	I1216 03:04:01.512238  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e: {Name:mk0cbda35d36fb3fc71fcbe38ba1d3cc195a5c18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.512402  263091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e ...
	I1216 03:04:01.512426  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e: {Name:mkf56f104ddb198bf3a0bef363952da2f9a9ac80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.512509  263091 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt.a0087f3e -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt
	I1216 03:04:01.512587  263091 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key.a0087f3e -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key
	I1216 03:04:01.512651  263091 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key
	I1216 03:04:01.512669  263091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt with IP's: []
	I1216 03:04:01.569012  263091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt ...
	I1216 03:04:01.569039  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt: {Name:mk63e7a75f98b5aa22fbfa8098ca980a7e4c9675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.569238  263091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key ...
	I1216 03:04:01.569259  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key: {Name:mkf4d398a37db0a29ab34e32185a5e96ebd560d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:01.569490  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:04:01.569534  263091 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:04:01.569546  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:04:01.569575  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:04:01.569603  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:04:01.569629  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:04:01.569687  263091 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:04:01.570352  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:04:01.589543  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:04:01.606382  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:04:01.623276  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:04:01.641474  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 03:04:01.659990  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1216 03:04:01.678537  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:04:01.697868  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:04:01.717445  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:04:01.741011  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:04:01.759643  263091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:04:01.779794  263091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:04:01.792350  263091 ssh_runner.go:195] Run: openssl version
	I1216 03:04:01.798991  263091 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.807501  263091 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:04:01.815525  263091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.819507  263091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.819556  263091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:01.862037  263091 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:04:01.870951  263091 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:04:01.879762  263091 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.887671  263091 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:04:01.896061  263091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.900463  263091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.900523  263091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:04:01.936042  263091 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:04:01.943747  263091 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:04:01.951160  263091 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:04:01.959036  263091 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:04:01.966588  263091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:04:01.970635  263091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:04:01.970694  263091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:04:02.013479  263091 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:04:02.021983  263091 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
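
Each CA placed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash, which is where names like b5213941.0, 51391683.0 and 3ec20f2e.0 above come from. The same steps for a single certificate, as a standalone sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # ".0" = first certificate with this hash
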
	I1216 03:04:02.029777  263091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:04:02.033458  263091 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:04:02.033531  263091 kubeadm.go:401] StartCluster: {Name:old-k8s-version-073001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-073001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:04:02.033632  263091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:04:02.033690  263091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:04:02.064462  263091 cri.go:89] found id: ""
	I1216 03:04:02.064525  263091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:04:02.074100  263091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:04:02.082707  263091 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:02.082771  263091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:02.091243  263091 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:02.091264  263091 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:02.091309  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:04:02.099254  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:02.099315  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:02.108014  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:04:02.118590  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:02.118644  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:02.127619  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:04:02.136734  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:02.136805  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:02.146173  263091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:04:02.153969  263091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:02.154021  263091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
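
The stale-config cleanup above applies one rule per kubeconfig: if the file does not reference https://control-plane.minikube.internal:8443, it is removed so kubeadm can regenerate it. A compact equivalent of those eight commands:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
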
	I1216 03:04:02.162764  263091 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:02.206957  263091 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1216 03:04:02.207063  263091 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:02.244835  263091 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:02.244926  263091 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:04:02.245096  263091 kubeadm.go:319] OS: Linux
	I1216 03:04:02.245186  263091 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:02.245263  263091 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:02.245334  263091 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:02.245424  263091 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:02.245510  263091 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:02.245593  263091 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:02.245676  263091 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:02.245739  263091 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:04:02.317952  263091 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:02.318083  263091 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:02.318198  263091 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 03:04:02.478593  263091 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:03:59.011887  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:03:59.012379  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
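
The health probe that keeps failing here is a plain GET against the apiserver's /healthz endpoint. From the node it can be reproduced with curl (-k skips TLS verification, since the minikube CA is not in the host trust store):

    curl -sk https://192.168.85.2:8443/healthz; echo
    # prints "ok" once the apiserver answers; "connection refused" while it is still coming up
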
	I1216 03:03:59.012446  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:59.012502  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:59.052877  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:59.052900  224341 cri.go:89] found id: ""
	I1216 03:03:59.052911  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:03:59.052971  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.057387  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:59.057450  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:59.097631  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:59.097657  224341 cri.go:89] found id: ""
	I1216 03:03:59.097666  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:03:59.097712  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.101698  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:59.101767  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:59.139530  224341 cri.go:89] found id: ""
	I1216 03:03:59.139550  224341 logs.go:282] 0 containers: []
	W1216 03:03:59.139557  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:03:59.139562  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:59.139624  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:59.178228  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:59.178251  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:03:59.178256  224341 cri.go:89] found id: ""
	I1216 03:03:59.178268  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:03:59.178331  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.182126  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.185622  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:59.185688  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:59.222091  224341 cri.go:89] found id: ""
	I1216 03:03:59.222118  224341 logs.go:282] 0 containers: []
	W1216 03:03:59.222128  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:59.222137  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:59.222199  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:59.256676  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:59.256703  224341 cri.go:89] found id: ""
	I1216 03:03:59.256713  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:03:59.256769  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.260657  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:59.260733  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:59.295566  224341 cri.go:89] found id: ""
	I1216 03:03:59.295589  224341 logs.go:282] 0 containers: []
	W1216 03:03:59.295601  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:59.295609  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:59.295672  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:59.330182  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:59.330210  224341 cri.go:89] found id: ""
	I1216 03:03:59.330220  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:03:59.330283  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:03:59.333999  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:03:59.334026  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:59.368515  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:59.368540  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:59.466981  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:59.467013  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:59.529687  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:59.529710  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:03:59.529725  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:03:59.578653  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:03:59.578681  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:03:59.612835  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:59.612862  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:59.665540  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:03:59.665571  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:59.704770  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:59.704795  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:59.720158  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:03:59.720183  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:03:59.757349  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:03:59.757377  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:03:59.834135  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:03:59.834176  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
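
Each "Gathering logs for ..." step above reduces to resolving the container ID with crictl and tailing its logs; the same sweep can be run by hand when debugging a stuck control plane. A minimal sketch, assuming crictl is on the node's PATH:

    for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "== $name $id =="
        sudo crictl logs --tail 400 "$id"
      done
    done
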
	I1216 03:04:02.377927  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:02.378373  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:02.378505  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:02.378564  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:02.418869  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:02.418897  224341 cri.go:89] found id: ""
	I1216 03:04:02.418908  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:02.419068  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.423024  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:02.423067  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:02.465875  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:02.465901  224341 cri.go:89] found id: ""
	I1216 03:04:02.465912  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:02.465977  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.470098  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:02.470182  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:02.508022  224341 cri.go:89] found id: ""
	I1216 03:04:02.508046  224341 logs.go:282] 0 containers: []
	W1216 03:04:02.508056  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:02.508076  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:02.508181  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:02.546221  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:02.546243  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:02.546249  224341 cri.go:89] found id: ""
	I1216 03:04:02.546256  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:02.546299  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.551252  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.555153  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:02.555221  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:02.591843  224341 cri.go:89] found id: ""
	I1216 03:04:02.591869  224341 logs.go:282] 0 containers: []
	W1216 03:04:02.591880  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:02.591889  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:02.591950  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:02.628765  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:02.628783  224341 cri.go:89] found id: ""
	I1216 03:04:02.628791  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:02.628860  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.632557  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:02.632618  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:02.667221  224341 cri.go:89] found id: ""
	I1216 03:04:02.667242  224341 logs.go:282] 0 containers: []
	W1216 03:04:02.667252  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:02.667259  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:02.667307  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:02.707718  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:02.707741  224341 cri.go:89] found id: ""
	I1216 03:04:02.707753  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:02.707810  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:02.712492  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:02.712519  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:02.731813  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:02.731855  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:02.808682  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:02.808712  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:02.857865  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:02.857899  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:02.901942  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:02.901969  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:03:58.523674  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:03:58.524025  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:03:58.524069  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:03:58.524355  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:03:58.567521  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:58.567542  233647 cri.go:89] found id: ""
	I1216 03:03:58.567552  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:03:58.567607  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:58.572445  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:03:58.572507  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:03:58.611526  233647 cri.go:89] found id: ""
	I1216 03:03:58.611551  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.611563  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:03:58.611570  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:03:58.611625  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:03:58.640517  233647 cri.go:89] found id: ""
	I1216 03:03:58.640546  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.640559  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:03:58.640568  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:03:58.640632  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:03:58.671980  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:58.672001  233647 cri.go:89] found id: ""
	I1216 03:03:58.672010  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:03:58.672061  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:58.676210  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:03:58.676278  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:03:58.705573  233647 cri.go:89] found id: ""
	I1216 03:03:58.705598  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.705607  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:03:58.705613  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:03:58.705658  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:03:58.734318  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:03:58.734341  233647 cri.go:89] found id: ""
	I1216 03:03:58.734350  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:03:58.734415  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:03:58.738863  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:03:58.738929  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:03:58.767548  233647 cri.go:89] found id: ""
	I1216 03:03:58.767576  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.767588  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:03:58.767595  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:03:58.767650  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:03:58.794764  233647 cri.go:89] found id: ""
	I1216 03:03:58.794793  233647 logs.go:282] 0 containers: []
	W1216 03:03:58.794805  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:03:58.794829  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:03:58.794844  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:03:58.865251  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:03:58.865285  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:03:58.896449  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:03:58.896473  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:03:58.985656  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:03:58.985692  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:03:59.000758  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:03:59.000792  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:03:59.067015  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:03:59.067040  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:03:59.067057  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:03:59.099073  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:03:59.099101  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:03:59.127092  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:03:59.127131  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:01.657881  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:01.658302  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:01.658359  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:01.658408  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:01.687548  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:01.687566  233647 cri.go:89] found id: ""
	I1216 03:04:01.687573  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:01.687619  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:01.691370  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:01.691434  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:01.720852  233647 cri.go:89] found id: ""
	I1216 03:04:01.720875  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.720885  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:01.720891  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:01.720947  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:01.748699  233647 cri.go:89] found id: ""
	I1216 03:04:01.748720  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.748727  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:01.748733  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:01.748849  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:01.777606  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:01.777632  233647 cri.go:89] found id: ""
	I1216 03:04:01.777643  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:01.777697  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:01.781795  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:01.781879  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:01.810401  233647 cri.go:89] found id: ""
	I1216 03:04:01.810425  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.810436  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:01.810444  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:01.810493  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:01.839874  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:01.839896  233647 cri.go:89] found id: ""
	I1216 03:04:01.839906  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:01.839962  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:01.843981  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:01.844034  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:01.874012  233647 cri.go:89] found id: ""
	I1216 03:04:01.874033  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.874041  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:01.874047  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:01.874097  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:01.903251  233647 cri.go:89] found id: ""
	I1216 03:04:01.903274  233647 logs.go:282] 0 containers: []
	W1216 03:04:01.903284  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:01.903295  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:01.903312  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:01.959343  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:01.959365  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:01.959380  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:01.991099  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:01.991124  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:02.019324  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:02.019365  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:02.047245  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:02.047268  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:02.108580  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:02.108615  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:02.143712  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:02.143748  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:02.236341  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:02.236379  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:02.481144  263091 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:02.481263  263091 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:02.481391  263091 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:02.595419  263091 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:04:02.721379  263091 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:04:02.830290  263091 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:04:02.981811  263091 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:04:03.220599  263091 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:04:03.220845  263091 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-073001] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 03:04:03.405917  263091 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:04:03.406077  263091 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-073001] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 03:04:03.572863  263091 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:04:03.676885  263091 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:04:03.800541  263091 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:04:03.800689  263091 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
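
At this point kubeadm has written the cluster PKI under /var/lib/minikube/certs (the certificatesDir from the config above). The SANs it just reported, e.g. for etcd/server, can be confirmed with openssl; a sketch assuming the standard kubeadm layout under that directory:

    sudo ls -R /var/lib/minikube/certs
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/etcd/server.crt | grep -A1 'Subject Alternative Name'
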
	I1216 03:04:01.983664  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:04:02.003111  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 03:04:02.022667  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:04:02.041627  266278 provision.go:87] duration metric: took 243.280062ms to configureAuth
	I1216 03:04:02.041664  266278 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:04:02.041884  266278 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:04:02.042032  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.063325  266278 main.go:143] libmachine: Using SSH client type: native
	I1216 03:04:02.063612  266278 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1216 03:04:02.063643  266278 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:04:02.362962  266278 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:04:02.362988  266278 machine.go:97] duration metric: took 4.050745794s to provisionDockerMachine
	I1216 03:04:02.362998  266278 client.go:176] duration metric: took 5.190713111s to LocalClient.Create
	I1216 03:04:02.363018  266278 start.go:167] duration metric: took 5.190812798s to libmachine.API.Create "no-preload-307185"
	I1216 03:04:02.363028  266278 start.go:293] postStartSetup for "no-preload-307185" (driver="docker")
	I1216 03:04:02.363043  266278 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:04:02.363102  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:04:02.363150  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.384989  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.490391  266278 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:04:02.494223  266278 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:04:02.494252  266278 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:04:02.494263  266278 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:04:02.494324  266278 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:04:02.494420  266278 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:04:02.494543  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:04:02.503099  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:04:02.524353  266278 start.go:296] duration metric: took 161.309598ms for postStartSetup
	I1216 03:04:02.524734  266278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307185
	I1216 03:04:02.546606  266278 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/config.json ...
	I1216 03:04:02.546934  266278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:04:02.546975  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.567006  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.664959  266278 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:04:02.669805  266278 start.go:128] duration metric: took 5.500351228s to createHost
	I1216 03:04:02.669847  266278 start.go:83] releasing machines lock for "no-preload-307185", held for 5.500531479s
	I1216 03:04:02.669912  266278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307185
	I1216 03:04:02.691505  266278 ssh_runner.go:195] Run: cat /version.json
	I1216 03:04:02.691557  266278 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:04:02.691576  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.691641  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:02.712618  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.713291  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:02.868756  266278 ssh_runner.go:195] Run: systemctl --version
	I1216 03:04:02.876448  266278 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:04:02.915259  266278 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:04:02.920342  266278 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:04:02.920421  266278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:04:02.951095  266278 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
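
Before configuring its own CNI, minikube neutralizes pre-existing bridge and podman CNI configs by renaming them with a .mk_disabled suffix; that is what the find ... -exec mv {} {}.mk_disabled command above does (loopback configs are deliberately left alone). A rough local equivalent of that rename pass, assuming direct access to /etc/cni/net.d rather than the SSH runner:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs in dir so the container
    // runtime ignores them, like the "mv {} {}.mk_disabled" pass in the log.
    func disableBridgeCNIs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            from := filepath.Join(dir, name)
            if err := os.Rename(from, from+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, from)
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNIs("/etc/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Println("disabled bridge cni config(s):", disabled)
    }
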
	I1216 03:04:02.951115  266278 start.go:496] detecting cgroup driver to use...
	I1216 03:04:02.951156  266278 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:04:02.951205  266278 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:04:02.968904  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:04:02.981039  266278 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:04:02.981094  266278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:04:02.998032  266278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:04:03.016854  266278 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:04:03.098050  266278 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:04:03.195156  266278 docker.go:234] disabling docker service ...
	I1216 03:04:03.195229  266278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:04:03.216952  266278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:04:03.231602  266278 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:04:03.334295  266278 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:04:03.416425  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:04:03.428983  266278 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:04:03.443622  266278 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:04:03.443684  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.453618  266278 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:04:03.453686  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.462440  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.470809  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.479005  266278 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:04:03.487343  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.496270  266278 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.509284  266278 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:04:03.517739  266278 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:04:03.525047  266278 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:04:03.533369  266278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:03.620574  266278 ssh_runner.go:195] Run: sudo systemctl restart crio
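
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to systemd, forces conmon_cgroup = "pod", injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, enables IPv4 forwarding, and then reloads systemd and restarts crio. A condensed sketch of the same command sequence, assuming a root shell on the node instead of minikube's ssh_runner (runAll is illustrative, and the sysctl/net.mk checks from the log are omitted):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runAll executes each shell snippet in order, stopping at the first failure,
    // roughly mirroring the sequence of ssh_runner calls in the log above.
    func runAll(cmds []string) error {
        for _, c := range cmds {
            out, err := exec.Command("sudo", "sh", "-c", c).CombinedOutput()
            if err != nil {
                return fmt.Errorf("%q failed: %v\n%s", c, err, out)
            }
        }
        return nil
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        cmds := []string{
            // Point crictl at the cri-o socket.
            `mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`,
            // Pause image and systemd cgroup driver, as in the sed calls in the log.
            fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' %s`, conf),
            fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' %s`, conf),
            fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
            // Allow unprivileged low ports inside pods.
            fmt.Sprintf(`grep -q '^ *default_sysctls' %s || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' %s`, conf, conf),
            fmt.Sprintf(`sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
            // Enable IPv4 forwarding, then restart cri-o with the new config.
            `echo 1 > /proc/sys/net/ipv4/ip_forward`,
            `systemctl daemon-reload && systemctl restart crio`,
        }
        if err := runAll(cmds); err != nil {
            fmt.Println(err)
        }
    }
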
	I1216 03:04:03.762684  266278 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:04:03.762754  266278 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:04:03.766975  266278 start.go:564] Will wait 60s for crictl version
	I1216 03:04:03.767035  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:03.771264  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:04:03.795139  266278 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:04:03.795225  266278 ssh_runner.go:195] Run: crio --version
	I1216 03:04:03.823118  266278 ssh_runner.go:195] Run: crio --version
	I1216 03:04:03.852583  266278 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 03:04:04.032313  263091 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:04.299675  263091 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:04.372304  263091 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:04.492210  263091 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:04.493444  263091 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:04.498611  263091 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:03.853753  266278 cli_runner.go:164] Run: docker network inspect no-preload-307185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:04:03.872620  266278 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 03:04:03.876915  266278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:04:03.887438  266278 kubeadm.go:884] updating cluster {Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:04:03.887555  266278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:04:03.887598  266278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:04:03.914376  266278 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1216 03:04:03.914396  266278 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
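
Because no preload tarball exists for v1.35.0-beta.0 on crio, minikube falls back to checking each required image individually against the runtime and transfers only the ones that are missing. A small sketch of that check, assuming `crictl images --output json` is available on the node and that the camelCase repoTags field in its output is what we match against (field name is an assumption about current crictl output, not taken from this log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // criImages mirrors the subset of `crictl images --output json` we need.
    type criImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // missingImages returns the required refs the container runtime does not
    // report, i.e. the images that would "need transfer" in the log above.
    func missingImages(required []string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return nil, err
        }
        var have criImages
        if err := json.Unmarshal(out, &have); err != nil {
            return nil, err
        }
        present := map[string]bool{}
        for _, img := range have.Images {
            for _, tag := range img.RepoTags {
                present[tag] = true
            }
        }
        var missing []string
        for _, ref := range required {
            if !present[ref] {
                missing = append(missing, ref)
            }
        }
        return missing, nil
    }

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/coredns/coredns:v1.13.1",
            "registry.k8s.io/pause:3.10.1",
        }
        missing, err := missingImages(required)
        if err != nil {
            fmt.Println("crictl images failed:", err)
            return
        }
        fmt.Println("needs transfer:", missing)
    }
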
	I1216 03:04:03.914471  266278 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:03.914485  266278 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:03.914506  266278 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:03.914519  266278 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1216 03:04:03.914507  266278 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:03.914542  266278 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:03.914490  266278 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:03.914586  266278 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:03.915772  266278 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:03.915775  266278 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:03.915777  266278 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:03.915777  266278 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:03.915847  266278 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:03.915854  266278 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:03.915774  266278 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:03.915802  266278 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1216 03:04:04.035591  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.035786  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.036396  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.043180  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.044017  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.057996  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1216 03:04:04.110145  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.129270  266278 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1216 03:04:04.129337  266278 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.129351  266278 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1216 03:04:04.129391  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129402  266278 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1216 03:04:04.129421  266278 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.129429  266278 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1216 03:04:04.129447  266278 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.129452  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129461  266278 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1216 03:04:04.129494  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129514  266278 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1216 03:04:04.129389  266278 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.129542  266278 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1216 03:04:04.129496  266278 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.129593  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129604  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.129627  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.145126  266278 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1216 03:04:04.145170  266278 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.145170  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.145207  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.145213  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.145240  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.145273  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:04:04.145313  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.145326  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.183011  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.183041  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.183044  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.183097  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.186071  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:04:04.186141  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.186187  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.221175  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:04:04.223778  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.223939  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:04:04.224304  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:04:04.254705  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:04:04.254725  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:04:04.254731  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.254751  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:04:04.254767  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1216 03:04:04.254812  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.254864  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1216 03:04:04.254898  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:04:04.255381  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:04.255450  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:04.286435  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:04.286473  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1216 03:04:04.286513  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1216 03:04:04.286543  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:04.286556  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:04:04.286475  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.286580  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1216 03:04:04.286582  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1216 03:04:04.289665  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1216 03:04:04.289690  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1216 03:04:04.289735  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1216 03:04:04.289797  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:04:04.289872  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.289889  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1216 03:04:04.292897  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1216 03:04:04.292927  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1216 03:04:04.293218  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.293259  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1216 03:04:04.303848  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1216 03:04:04.303884  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1216 03:04:04.327062  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1216 03:04:04.327097  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
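
Each cached image tarball is pushed to /var/lib/minikube/images only after a cheap existence probe: the `stat -c "%s %y"` runs above, whose "Process exited with status 1 ... No such file or directory" results are the expected answer that triggers the scp. A sketch of the same probe-then-copy decision, assuming a local destination directory stands in for the SSH/scp transport:

    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    // copyIfMissing copies src into destDir only when the destination file does
    // not already exist, mirroring the stat-then-scp pattern in the log above.
    func copyIfMissing(src, destDir string) error {
        dest := filepath.Join(destDir, filepath.Base(src))
        if _, err := os.Stat(dest); err == nil {
            fmt.Println("already present, skipping:", dest)
            return nil
        } else if !os.IsNotExist(err) {
            return err // a real stat failure, not just "No such file or directory"
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        n, err := io.Copy(out, in)
        if err != nil {
            return err
        }
        fmt.Printf("copied %s (%d bytes)\n", dest, n)
        return nil
    }

    func main() {
        cache := "/home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io"
        if err := copyIfMissing(filepath.Join(cache, "pause_3.10.1"), "/var/lib/minikube/images"); err != nil {
            fmt.Println("error:", err)
        }
    }
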
	I1216 03:04:04.434191  266278 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1216 03:04:04.434263  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1216 03:04:04.876508  266278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:04.919490  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1216 03:04:04.919531  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.919564  266278 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 03:04:04.919601  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:04:04.919605  266278 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:04.919649  266278 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.171069  266278 ssh_runner.go:235] Completed: which crictl: (1.251397441s)
	I1216 03:04:06.171119  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.251492887s)
	I1216 03:04:06.171140  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:06.171152  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1216 03:04:06.171181  266278 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1216 03:04:06.171231  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
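
Once a tarball is on the node it is imported with `sudo podman load -i <file>`; because cri-o and podman share the same containers/storage backend on this image, the loaded image is then visible to cri-o without a separate import step. A minimal sketch of loading every tarball found under /var/lib/minikube/images, assuming podman is on PATH on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    // loadCachedImages runs `podman load -i` for every image tarball in dir,
    // matching the "Loading image: /var/lib/minikube/images/..." steps above.
    func loadCachedImages(dir string) error {
        tarballs, err := filepath.Glob(filepath.Join(dir, "*"))
        if err != nil {
            return err
        }
        for _, t := range tarballs {
            out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
            if err != nil {
                return fmt.Errorf("podman load %s: %v\n%s", t, err, out)
            }
            fmt.Println("loaded", t)
        }
        return nil
    }

    func main() {
        if err := loadCachedImages("/var/lib/minikube/images"); err != nil {
            fmt.Println(err)
        }
    }
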
	I1216 03:04:02.940916  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:02.940942  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:02.995151  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:02.995180  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:03.113309  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:03.113344  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:03.187967  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:03.187994  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:03.188013  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:03.231777  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:03.231809  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:03.287675  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:03.287881  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:05.830947  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:05.831466  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:05.831537  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:05.831601  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:05.880129  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:05.880153  224341 cri.go:89] found id: ""
	I1216 03:04:05.880163  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:05.880217  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:05.886405  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:05.886498  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:05.938074  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:05.938105  224341 cri.go:89] found id: ""
	I1216 03:04:05.938116  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:05.938181  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:05.942956  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:05.943016  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:05.990225  224341 cri.go:89] found id: ""
	I1216 03:04:05.990258  224341 logs.go:282] 0 containers: []
	W1216 03:04:05.990270  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:05.990279  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:05.990337  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:06.032680  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:06.032712  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:06.032719  224341 cri.go:89] found id: ""
	I1216 03:04:06.032733  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:06.032794  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.037967  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.042517  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:06.042580  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:06.090023  224341 cri.go:89] found id: ""
	I1216 03:04:06.090048  224341 logs.go:282] 0 containers: []
	W1216 03:04:06.090059  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:06.090066  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:06.090137  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:06.138405  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:06.138429  224341 cri.go:89] found id: ""
	I1216 03:04:06.138439  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:06.138496  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.143153  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:06.143216  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:06.184367  224341 cri.go:89] found id: ""
	I1216 03:04:06.184398  224341 logs.go:282] 0 containers: []
	W1216 03:04:06.184408  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:06.184421  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:06.184483  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:06.232665  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:06.232689  224341 cri.go:89] found id: ""
	I1216 03:04:06.232698  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:06.232754  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:06.237557  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:06.237580  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:06.361672  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:06.361707  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:06.428251  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:06.428288  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:06.490628  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:06.490661  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:06.564274  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:06.564304  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:06.583599  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:06.583629  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:06.653831  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:06.653866  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:06.653893  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:06.698353  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:06.698383  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:06.794116  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:06.794151  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:06.840856  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:06.840891  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:06.889217  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:06.889251  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
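
While the apiserver stays unreachable (connection refused on :8443), the harness keeps gathering diagnostics: journalctl for kubelet and crio, `crictl ps -a --quiet --name=<component>` to find container IDs, and `crictl logs --tail 400 <id>` for each hit. A compact sketch of that loop, assuming crictl is already configured to talk to the local cri-o socket (the component list is taken from the log above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerLogs finds all containers whose name matches component and returns
    // the tail of their logs, mirroring the "Gathering logs for ..." steps above.
    func containerLogs(component string) (map[string]string, error) {
        idsOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        logs := map[string]string{}
        for _, id := range strings.Fields(string(idsOut)) {
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return nil, fmt.Errorf("crictl logs %s: %v", id, err)
            }
            logs[id] = string(out)
        }
        return logs, nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager", "storage-provisioner"}
        for _, c := range components {
            got, err := containerLogs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", c, len(got))
        }
    }
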
	I1216 03:04:04.755242  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:04.755653  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:04.755714  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:04.755768  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:04.785233  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:04.785257  233647 cri.go:89] found id: ""
	I1216 03:04:04.785270  233647 logs.go:282] 1 containers: [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:04.785336  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.789247  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:04.789303  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:04.815637  233647 cri.go:89] found id: ""
	I1216 03:04:04.815666  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.815677  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:04.815687  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:04.815755  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:04.845854  233647 cri.go:89] found id: ""
	I1216 03:04:04.845883  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.845894  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:04.845902  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:04.845960  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:04.875863  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:04.875886  233647 cri.go:89] found id: ""
	I1216 03:04:04.875895  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:04.875960  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.880407  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:04.880477  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:04.910399  233647 cri.go:89] found id: ""
	I1216 03:04:04.910426  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.910436  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:04.910444  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:04.910496  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:04.939433  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:04.939456  233647 cri.go:89] found id: ""
	I1216 03:04:04.939466  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:04.939519  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:04.944067  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:04.944135  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:04.973700  233647 cri.go:89] found id: ""
	I1216 03:04:04.973728  233647 logs.go:282] 0 containers: []
	W1216 03:04:04.973739  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:04.973746  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:04.973806  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:05.003004  233647 cri.go:89] found id: ""
	I1216 03:04:05.003033  233647 logs.go:282] 0 containers: []
	W1216 03:04:05.003045  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:05.003058  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:05.003074  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:05.016956  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:05.016984  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:05.072462  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:05.072483  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:05.072498  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:05.107385  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:05.107426  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:05.138573  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:05.138606  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:05.165268  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:05.165293  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:05.219542  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:05.219571  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:05.249525  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:05.249550  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:07.841896  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:04.500288  263091 out.go:252]   - Booting up control plane ...
	I1216 03:04:04.500430  263091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:04.500534  263091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:04.501573  263091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:04.522152  263091 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:04.523289  263091 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:04.523363  263091 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:04.678458  263091 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 03:04:09.180524  263091 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502212 seconds
	I1216 03:04:09.180751  263091 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:04:09.192727  263091 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:04:09.715333  263091 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:04:09.715610  263091 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-073001 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:04:10.228888  263091 kubeadm.go:319] [bootstrap-token] Using token: srwvus.woqadb8emztifzee
	I1216 03:04:10.233941  263091 out.go:252]   - Configuring RBAC rules ...
	I1216 03:04:10.234089  263091 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:04:10.235385  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:04:10.242481  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:04:10.247236  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:04:10.250034  263091 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:04:10.253571  263091 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:04:10.264298  263091 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:04:10.499294  263091 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:04:10.640713  263091 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:04:10.641879  263091 kubeadm.go:319] 
	I1216 03:04:10.641973  263091 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:04:10.641983  263091 kubeadm.go:319] 
	I1216 03:04:10.642076  263091 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:04:10.642087  263091 kubeadm.go:319] 
	I1216 03:04:10.642113  263091 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:04:10.642183  263091 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:04:10.642273  263091 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:04:10.642312  263091 kubeadm.go:319] 
	I1216 03:04:10.642412  263091 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:04:10.642422  263091 kubeadm.go:319] 
	I1216 03:04:10.642491  263091 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:04:10.642499  263091 kubeadm.go:319] 
	I1216 03:04:10.642573  263091 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:04:10.642682  263091 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:04:10.642789  263091 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:04:10.642799  263091 kubeadm.go:319] 
	I1216 03:04:10.642954  263091 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:04:10.643065  263091 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:04:10.643074  263091 kubeadm.go:319] 
	I1216 03:04:10.643216  263091 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token srwvus.woqadb8emztifzee \
	I1216 03:04:10.643350  263091 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:04:10.643385  263091 kubeadm.go:319] 	--control-plane 
	I1216 03:04:10.643396  263091 kubeadm.go:319] 
	I1216 03:04:10.643523  263091 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:04:10.643532  263091 kubeadm.go:319] 
	I1216 03:04:10.643632  263091 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token srwvus.woqadb8emztifzee \
	I1216 03:04:10.643758  263091 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:04:10.646150  263091 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:04:10.646335  263091 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:10.646356  263091 cni.go:84] Creating CNI manager for ""
	I1216 03:04:10.646364  263091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:10.647742  263091 out.go:179] * Configuring CNI (Container Networking Interface) ...
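
The kubeadm bootstrap above ends with minikube picking a CNI: cni.go:143 states that the "docker" driver combined with the "crio" runtime gets kindnet recommended. A toy restatement of exactly that rule and nothing more (any other combination is out of scope for this sketch):

    package main

    import "fmt"

    // chooseCNI restates the rule from the log line: the "docker" driver with the
    // "crio" runtime gets kindnet recommended; other cases are not modeled here.
    func chooseCNI(driver, containerRuntime string) string {
        if driver == "docker" && containerRuntime == "crio" {
            return "kindnet"
        }
        return "auto"
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // "kindnet", matching cni.go:143 above
    }
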
	I1216 03:04:07.661053  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.489799806s)
	I1216 03:04:07.661080  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1216 03:04:07.661103  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:07.661106  266278 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.489944653s)
	I1216 03:04:07.661168  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:04:07.661171  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:08.867203  266278 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.205931929s)
	I1216 03:04:08.867240  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.206038199s)
	I1216 03:04:08.867262  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1216 03:04:08.867288  266278 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:04:08.867335  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:04:08.867291  266278 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:10.191168  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.323808142s)
	I1216 03:04:10.191196  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1216 03:04:10.191214  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:04:10.191227  266278 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.323831368s)
	I1216 03:04:10.191258  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:04:10.191273  266278 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 03:04:10.191367  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:04:11.480218  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.28893578s)
	I1216 03:04:11.480249  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1216 03:04:11.480273  266278 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:11.480321  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:04:11.480338  266278 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.288951678s)
	I1216 03:04:11.480368  266278 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 03:04:11.480395  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1216 03:04:09.438608  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:09.439033  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:09.439085  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:09.439138  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:09.491099  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:09.491234  224341 cri.go:89] found id: ""
	I1216 03:04:09.491263  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:09.491344  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.496741  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:09.496964  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:09.537646  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:09.537669  224341 cri.go:89] found id: ""
	I1216 03:04:09.537679  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:09.537734  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.542422  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:09.542509  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:09.584631  224341 cri.go:89] found id: ""
	I1216 03:04:09.584661  224341 logs.go:282] 0 containers: []
	W1216 03:04:09.584671  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:09.584682  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:09.584737  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:09.621992  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:09.622025  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:09.622030  224341 cri.go:89] found id: ""
	I1216 03:04:09.622038  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:09.622090  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.626706  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.630966  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:09.631028  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:09.668513  224341 cri.go:89] found id: ""
	I1216 03:04:09.668545  224341 logs.go:282] 0 containers: []
	W1216 03:04:09.668559  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:09.668567  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:09.668621  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:09.712724  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:09.712757  224341 cri.go:89] found id: ""
	I1216 03:04:09.712765  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:09.712838  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.717834  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:09.717902  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:09.755792  224341 cri.go:89] found id: ""
	I1216 03:04:09.755839  224341 logs.go:282] 0 containers: []
	W1216 03:04:09.755851  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:09.755859  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:09.755921  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:09.792080  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:09.792107  224341 cri.go:89] found id: ""
	I1216 03:04:09.792119  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:09.792180  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:09.796182  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:09.796209  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:09.834786  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:09.834857  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:09.884561  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:09.884594  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:09.924698  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:09.924732  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:09.966756  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:09.966788  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:10.023237  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:10.023267  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:10.136733  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:10.136763  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:10.185983  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:10.186013  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:10.272968  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:10.272995  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:10.322863  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:10.322918  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:10.338767  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:10.338795  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:10.398927  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:12.899876  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:12.900306  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:12.900364  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:12.900424  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:12.846642  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:04:12.846696  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:12.846749  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:12.878486  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:12.878503  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:12.878507  233647 cri.go:89] found id: ""
	I1216 03:04:12.878514  233647 logs.go:282] 2 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:12.878564  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.883115  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.886731  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:12.886783  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:12.914967  233647 cri.go:89] found id: ""
	I1216 03:04:12.914993  233647 logs.go:282] 0 containers: []
	W1216 03:04:12.915004  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:12.915011  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:12.915081  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:12.941248  233647 cri.go:89] found id: ""
	I1216 03:04:12.941275  233647 logs.go:282] 0 containers: []
	W1216 03:04:12.941288  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:12.941296  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:12.941354  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:12.970514  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:12.970536  233647 cri.go:89] found id: ""
	I1216 03:04:12.970545  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:12.970594  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.974652  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:12.974719  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:13.003021  233647 cri.go:89] found id: ""
	I1216 03:04:13.003045  233647 logs.go:282] 0 containers: []
	W1216 03:04:13.003056  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:13.003064  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:13.003122  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:13.032068  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:13.032091  233647 cri.go:89] found id: ""
	I1216 03:04:13.032101  233647 logs.go:282] 1 containers: [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:13.032163  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.036326  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:13.036387  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:13.065149  233647 cri.go:89] found id: ""
	I1216 03:04:13.065186  233647 logs.go:282] 0 containers: []
	W1216 03:04:13.065195  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:13.065202  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:13.065257  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:13.099208  233647 cri.go:89] found id: ""
	I1216 03:04:13.099234  233647 logs.go:282] 0 containers: []
	W1216 03:04:13.099245  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:13.099264  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:13.099278  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:10.649413  263091 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:04:10.654390  263091 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1216 03:04:10.654411  263091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:04:10.671034  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:04:11.460915  263091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:04:11.460994  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:11.461016  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-073001 minikube.k8s.io/updated_at=2025_12_16T03_04_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=old-k8s-version-073001 minikube.k8s.io/primary=true
	I1216 03:04:11.471518  263091 ops.go:34] apiserver oom_adj: -16
	I1216 03:04:11.545111  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:12.045339  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:12.546085  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:13.045258  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:13.546024  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:12.854610  266278 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.374266272s)
	I1216 03:04:12.854646  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1216 03:04:12.854673  266278 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:04:12.854719  266278 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:04:13.516876  266278 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-5058/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 03:04:13.516943  266278 cache_images.go:125] Successfully loaded all cached images
	I1216 03:04:13.516954  266278 cache_images.go:94] duration metric: took 9.602542369s to LoadCachedImages
	I1216 03:04:13.516970  266278 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 03:04:13.517082  266278 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:04:13.517191  266278 ssh_runner.go:195] Run: crio config
	I1216 03:04:13.574164  266278 cni.go:84] Creating CNI manager for ""
	I1216 03:04:13.574184  266278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:13.574198  266278 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:04:13.574226  266278 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307185 NodeName:no-preload-307185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:04:13.574408  266278 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:04:13.574495  266278 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:04:13.583770  266278 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1216 03:04:13.583848  266278 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:04:13.592159  266278 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1216 03:04:13.592259  266278 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1216 03:04:13.592275  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1216 03:04:13.592439  266278 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1216 03:04:13.596606  266278 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1216 03:04:13.596628  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1216 03:04:14.551399  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:14.565793  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1216 03:04:14.570370  266278 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1216 03:04:14.570406  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1216 03:04:14.730778  266278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1216 03:04:14.734841  266278 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1216 03:04:14.734875  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1216 03:04:14.902938  266278 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:04:14.911139  266278 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 03:04:14.924210  266278 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 03:04:15.027557  266278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1216 03:04:15.040925  266278 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:04:15.044751  266278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:04:15.113447  266278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:15.193177  266278 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:15.222572  266278 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185 for IP: 192.168.94.2
	I1216 03:04:15.222592  266278 certs.go:195] generating shared ca certs ...
	I1216 03:04:15.222606  266278 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.222767  266278 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:04:15.222810  266278 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:04:15.222846  266278 certs.go:257] generating profile certs ...
	I1216 03:04:15.222923  266278 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.key
	I1216 03:04:15.222936  266278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt with IP's: []
	I1216 03:04:15.239804  266278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt ...
	I1216 03:04:15.239839  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt: {Name:mkbb1d9d6d674b7216f912d7f18b1921d34f7eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.240043  266278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.key ...
	I1216 03:04:15.240061  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.key: {Name:mk1f823c374a6d2710b2ec138116bfc954bf1945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.240186  266278 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a
	I1216 03:04:15.240203  266278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1216 03:04:15.257410  266278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a ...
	I1216 03:04:15.257433  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a: {Name:mk355e0be250ac1cc67932cde908b24fd54a0255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.257604  266278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a ...
	I1216 03:04:15.257620  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a: {Name:mkdd92510da8a63f303809f61444ea12cd95af40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.257726  266278 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt.ca4e474a -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt
	I1216 03:04:15.257833  266278 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key.ca4e474a -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key
	I1216 03:04:15.257940  266278 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key
	I1216 03:04:15.257958  266278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt with IP's: []
	I1216 03:04:15.347262  266278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt ...
	I1216 03:04:15.347294  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt: {Name:mke789521dd6396d588cece41e1ec6a2655c1c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.347489  266278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key ...
	I1216 03:04:15.347506  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key: {Name:mke97c7d39a77521fb29839f489063e708457adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:15.347711  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:04:15.347751  266278 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:04:15.347761  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:04:15.347795  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:04:15.347837  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:04:15.347868  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:04:15.347924  266278 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:04:15.348673  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:04:15.367358  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:04:15.385345  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:04:15.403082  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:04:15.420424  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 03:04:15.438514  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:04:15.456421  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:04:15.474062  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:04:15.490934  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:04:15.511131  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:04:15.529088  266278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:04:15.546922  266278 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:04:15.559505  266278 ssh_runner.go:195] Run: openssl version
	I1216 03:04:15.566808  266278 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.574889  266278 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:04:15.583165  266278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.587118  266278 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.587168  266278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:04:15.627679  266278 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:04:15.635693  266278 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:04:15.643484  266278 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.650955  266278 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:04:15.658185  266278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.662078  266278 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.662126  266278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:04:15.698607  266278 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:04:15.706721  266278 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:04:15.714158  266278 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.721769  266278 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:04:15.729188  266278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.732962  266278 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.733014  266278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:04:15.767296  266278 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:04:15.775556  266278 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:04:15.783887  266278 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:04:15.787609  266278 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:04:15.787673  266278 kubeadm.go:401] StartCluster: {Name:no-preload-307185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:04:15.787748  266278 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:04:15.787865  266278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:04:15.814912  266278 cri.go:89] found id: ""
	I1216 03:04:15.814989  266278 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:04:15.823123  266278 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:04:15.831704  266278 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:15.831756  266278 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:15.839707  266278 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:15.839728  266278 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:15.839763  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:04:15.847768  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:15.847843  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:15.854954  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:04:15.862885  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:15.862935  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:15.870270  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:04:15.878665  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:15.878715  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:15.886994  266278 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:04:15.895104  266278 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:15.895158  266278 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:15.902577  266278 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:16.014568  266278 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:04:16.071084  266278 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:12.947630  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:12.947651  224341 cri.go:89] found id: ""
	I1216 03:04:12.947660  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:12.947718  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.951840  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:12.951912  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:12.989252  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:12.989270  224341 cri.go:89] found id: ""
	I1216 03:04:12.989277  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:12.989321  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:12.993130  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:12.993209  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:13.032791  224341 cri.go:89] found id: ""
	I1216 03:04:13.032815  224341 logs.go:282] 0 containers: []
	W1216 03:04:13.032854  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:13.032868  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:13.032917  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:13.072341  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:13.072367  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:13.072373  224341 cri.go:89] found id: ""
	I1216 03:04:13.072382  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:13.072438  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.077091  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.080882  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:13.080954  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:13.125433  224341 cri.go:89] found id: ""
	I1216 03:04:13.125462  224341 logs.go:282] 0 containers: []
	W1216 03:04:13.125474  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:13.125490  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:13.125554  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:13.170369  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:13.170392  224341 cri.go:89] found id: ""
	I1216 03:04:13.170400  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:13.170448  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.174306  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:13.174370  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:13.217698  224341 cri.go:89] found id: ""
	I1216 03:04:13.217733  224341 logs.go:282] 0 containers: []
	W1216 03:04:13.217745  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:13.217753  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:13.217813  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:13.262721  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:13.262744  224341 cri.go:89] found id: ""
	I1216 03:04:13.262754  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:13.262837  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:13.267184  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:13.267211  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:13.334460  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:13.334482  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:13.334496  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:13.390840  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:13.390869  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:13.485577  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:13.485611  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:13.531410  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:13.531439  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:13.574060  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:13.574089  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:13.619699  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:13.619726  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:13.701623  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:13.701657  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:13.751906  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:13.751936  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:13.857738  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:13.857767  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:13.874169  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:13.874195  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:16.415876  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:16.416343  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:16.416403  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:16.416459  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:16.457702  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:16.457724  224341 cri.go:89] found id: ""
	I1216 03:04:16.457733  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:16.457785  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.461679  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:16.461751  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:16.495975  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:16.495995  224341 cri.go:89] found id: ""
	I1216 03:04:16.496002  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:16.496049  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.499688  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:16.499745  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:16.534113  224341 cri.go:89] found id: ""
	I1216 03:04:16.534137  224341 logs.go:282] 0 containers: []
	W1216 03:04:16.534147  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:16.534153  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:16.534201  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:16.569189  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:16.569216  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:16.569222  224341 cri.go:89] found id: ""
	I1216 03:04:16.569231  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:16.569300  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.573304  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.577186  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:16.577251  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:16.614893  224341 cri.go:89] found id: ""
	I1216 03:04:16.614925  224341 logs.go:282] 0 containers: []
	W1216 03:04:16.614936  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:16.614943  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:16.615001  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:16.650342  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:16.650361  224341 cri.go:89] found id: ""
	I1216 03:04:16.650368  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:16.650427  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.654321  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:16.654379  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:16.687411  224341 cri.go:89] found id: ""
	I1216 03:04:16.687438  224341 logs.go:282] 0 containers: []
	W1216 03:04:16.687446  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:16.687452  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:16.687508  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:16.727021  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:16.727044  224341 cri.go:89] found id: ""
	I1216 03:04:16.727053  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:16.727102  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:16.730811  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:16.730847  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:16.826431  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:16.826461  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:16.843225  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:16.843258  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:16.914349  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:16.914367  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:16.914384  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:16.951389  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:16.951415  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:16.997956  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:16.997988  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:17.031585  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:17.031610  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:17.073072  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:17.073099  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:17.155148  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:17.155180  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:17.200150  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:17.200178  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:17.236104  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:17.236131  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:13.197046  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:13.197087  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:04:14.045978  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:14.545782  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:15.046061  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:15.545175  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:16.045445  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:16.546270  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:17.045249  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:17.546009  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:18.045466  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:18.546059  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:19.790165  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:19.790613  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:19.790671  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:19.790722  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:19.836240  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:19.836266  224341 cri.go:89] found id: ""
	I1216 03:04:19.836276  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:19.836333  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.840180  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:19.840256  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:19.876262  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:19.876282  224341 cri.go:89] found id: ""
	I1216 03:04:19.876291  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:19.876351  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.880702  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:19.880761  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:19.931372  224341 cri.go:89] found id: ""
	I1216 03:04:19.931400  224341 logs.go:282] 0 containers: []
	W1216 03:04:19.931411  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:19.931539  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:19.931639  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:19.981968  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:19.981994  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:19.982001  224341 cri.go:89] found id: ""
	I1216 03:04:19.982011  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:19.982058  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.985944  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:19.989995  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:19.990053  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:20.045000  224341 cri.go:89] found id: ""
	I1216 03:04:20.045029  224341 logs.go:282] 0 containers: []
	W1216 03:04:20.045038  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:20.045045  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:20.045118  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:20.087685  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:20.087710  224341 cri.go:89] found id: ""
	I1216 03:04:20.087721  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:20.087774  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:20.092446  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:20.092528  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:20.145165  224341 cri.go:89] found id: ""
	I1216 03:04:20.145190  224341 logs.go:282] 0 containers: []
	W1216 03:04:20.145203  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:20.145211  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:20.145270  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:20.190416  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:20.190442  224341 cri.go:89] found id: ""
	I1216 03:04:20.190453  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:20.190512  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:20.194873  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:20.194895  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:20.267295  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:20.267325  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:20.267337  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:20.305052  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:20.305083  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:20.353657  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:20.353689  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:20.433463  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:20.433494  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:20.475122  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:20.475157  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:20.510661  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:20.510690  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:20.544700  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:20.544722  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:20.589377  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:20.589405  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:20.688706  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:20.688736  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:20.705015  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:20.705040  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:23.602639  266278 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:04:23.602712  266278 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:23.602904  266278 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:23.603002  266278 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:04:23.603067  266278 kubeadm.go:319] OS: Linux
	I1216 03:04:23.603145  266278 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:23.603200  266278 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:23.603282  266278 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:23.603357  266278 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:23.603443  266278 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:23.603520  266278 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:23.603597  266278 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:23.603668  266278 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:04:23.603769  266278 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:23.603949  266278 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:23.604068  266278 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:04:23.604154  266278 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:04:23.606102  266278 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:23.606220  266278 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:23.606333  266278 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:23.606428  266278 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:04:23.606513  266278 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:04:23.606598  266278 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:04:23.606666  266278 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:04:23.606756  266278 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:04:23.606949  266278 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307185] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:04:23.607032  266278 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:04:23.607201  266278 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307185] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:04:23.607294  266278 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:04:23.607382  266278 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:04:23.607446  266278 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:04:23.607524  266278 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:23.607598  266278 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:23.607698  266278 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:04:23.607803  266278 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:23.607932  266278 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:23.608010  266278 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:23.608111  266278 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:23.608198  266278 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:23.609656  266278 out.go:252]   - Booting up control plane ...
	I1216 03:04:23.609777  266278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:23.609894  266278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:23.610004  266278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:23.610148  266278 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:23.610280  266278 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:04:23.610427  266278 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:04:23.610538  266278 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:23.610603  266278 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:23.610751  266278 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:04:23.610921  266278 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:04:23.611024  266278 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.330244ms
	I1216 03:04:23.611184  266278 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:04:23.611300  266278 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1216 03:04:23.611386  266278 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:04:23.611490  266278 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:04:23.611617  266278 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004939922s
	I1216 03:04:23.611724  266278 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.866682862s
	I1216 03:04:23.611834  266278 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001281569s
	I1216 03:04:23.611970  266278 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:04:23.612156  266278 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:04:23.612252  266278 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:04:23.612533  266278 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-307185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:04:23.612619  266278 kubeadm.go:319] [bootstrap-token] Using token: 9g2v5j.7sk8fy8x333gc5hf
	I1216 03:04:23.614196  266278 out.go:252]   - Configuring RBAC rules ...
	I1216 03:04:23.614321  266278 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:04:23.614437  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:04:23.614653  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:04:23.614869  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:04:23.615042  266278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:04:23.615171  266278 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:04:23.615326  266278 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:04:23.615393  266278 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:04:23.615455  266278 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:04:23.615462  266278 kubeadm.go:319] 
	I1216 03:04:23.615542  266278 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:04:23.615549  266278 kubeadm.go:319] 
	I1216 03:04:23.615656  266278 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:04:23.615674  266278 kubeadm.go:319] 
	I1216 03:04:23.615712  266278 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:04:23.615797  266278 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:04:23.615868  266278 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:04:23.615877  266278 kubeadm.go:319] 
	I1216 03:04:23.615950  266278 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:04:23.615958  266278 kubeadm.go:319] 
	I1216 03:04:23.616136  266278 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:04:23.616161  266278 kubeadm.go:319] 
	I1216 03:04:23.616231  266278 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:04:23.616354  266278 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:04:23.616452  266278 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:04:23.616459  266278 kubeadm.go:319] 
	I1216 03:04:23.616565  266278 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:04:23.616666  266278 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:04:23.616673  266278 kubeadm.go:319] 
	I1216 03:04:23.616781  266278 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9g2v5j.7sk8fy8x333gc5hf \
	I1216 03:04:23.616920  266278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:04:23.616948  266278 kubeadm.go:319] 	--control-plane 
	I1216 03:04:23.616955  266278 kubeadm.go:319] 
	I1216 03:04:23.617062  266278 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:04:23.617068  266278 kubeadm.go:319] 
	I1216 03:04:23.617176  266278 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9g2v5j.7sk8fy8x333gc5hf \
	I1216 03:04:23.617313  266278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:04:23.617330  266278 cni.go:84] Creating CNI manager for ""
	I1216 03:04:23.617340  266278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:23.619028  266278 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 03:04:19.045699  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:19.546094  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:20.046043  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:20.546036  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:21.045497  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:21.545741  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:22.045475  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:22.545535  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.046011  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.545717  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.045308  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.545485  263091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.623099  263091 kubeadm.go:1114] duration metric: took 13.162156759s to wait for elevateKubeSystemPrivileges
	I1216 03:04:24.623139  263091 kubeadm.go:403] duration metric: took 22.589611877s to StartCluster
	I1216 03:04:24.623156  263091 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:24.623246  263091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:04:24.624669  263091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:24.624956  263091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:04:24.624949  263091 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:04:24.624979  263091 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:04:24.625052  263091 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-073001"
	I1216 03:04:24.625064  263091 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-073001"
	I1216 03:04:24.625073  263091 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-073001"
	I1216 03:04:24.625081  263091 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-073001"
	I1216 03:04:24.625104  263091 host.go:66] Checking if "old-k8s-version-073001" exists ...
	I1216 03:04:24.625134  263091 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:04:24.625495  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:04:24.625638  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:04:24.627386  263091 out.go:179] * Verifying Kubernetes components...
	I1216 03:04:24.628795  263091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:24.651412  263091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:24.652015  263091 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-073001"
	I1216 03:04:24.652059  263091 host.go:66] Checking if "old-k8s-version-073001" exists ...
	I1216 03:04:24.652552  263091 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:04:24.653262  263091 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:24.653284  263091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:04:24.653335  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:04:24.680112  263091 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:24.680146  263091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:04:24.680222  263091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:04:24.686017  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:04:24.708030  263091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:04:24.746357  263091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:04:24.826483  263091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:24.828208  263091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:24.868724  263091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:25.091362  263091 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1216 03:04:25.352085  263091 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-073001" to be "Ready" ...
	I1216 03:04:25.359981  263091 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:04:23.620503  266278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:04:23.626089  266278 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1216 03:04:23.626112  266278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:04:23.640346  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:04:23.887788  266278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:04:23.888050  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.888268  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-307185 minikube.k8s.io/updated_at=2025_12_16T03_04_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=no-preload-307185 minikube.k8s.io/primary=true
	I1216 03:04:23.900446  266278 ops.go:34] apiserver oom_adj: -16
	I1216 03:04:23.971454  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.472485  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:24.971793  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:25.472038  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:25.971985  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:26.472452  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:26.971792  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:23.259983  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:23.260507  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:23.260566  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:23.260626  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:23.300479  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:23.300508  224341 cri.go:89] found id: ""
	I1216 03:04:23.300519  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:23.300581  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.304563  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:23.304630  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:23.343023  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:23.343041  224341 cri.go:89] found id: ""
	I1216 03:04:23.343049  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:23.343095  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.347187  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:23.347258  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:23.386143  224341 cri.go:89] found id: ""
	I1216 03:04:23.386167  224341 logs.go:282] 0 containers: []
	W1216 03:04:23.386175  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:23.386181  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:23.386233  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:23.435373  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:23.435401  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:23.435407  224341 cri.go:89] found id: ""
	I1216 03:04:23.435435  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:23.435497  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.440354  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.444807  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:23.444887  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:23.482204  224341 cri.go:89] found id: ""
	I1216 03:04:23.482232  224341 logs.go:282] 0 containers: []
	W1216 03:04:23.482243  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:23.482250  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:23.482310  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:23.521654  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:23.521679  224341 cri.go:89] found id: ""
	I1216 03:04:23.521689  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:23.521748  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.526131  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:23.526197  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:23.562798  224341 cri.go:89] found id: ""
	I1216 03:04:23.562833  224341 logs.go:282] 0 containers: []
	W1216 03:04:23.562844  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:23.562851  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:23.562912  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:23.601185  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:23.601601  224341 cri.go:89] found id: ""
	I1216 03:04:23.601635  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:23.601718  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:23.607333  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:23.607358  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:23.663321  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:23.663358  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:23.701580  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:23.701609  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:23.773301  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:23.773340  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:23.815567  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:23.815601  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:23.897998  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:23.898108  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:23.898128  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:23.948183  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:23.948223  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:23.989099  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:23.989135  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:24.104794  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:24.104844  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:24.126371  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:24.126588  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:24.180622  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:24.180650  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:26.772913  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:26.773363  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:26.773422  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:26.773483  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:26.810158  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:26.810181  224341 cri.go:89] found id: ""
	I1216 03:04:26.810188  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:26.810239  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.813907  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:26.813976  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:26.850152  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:26.850175  224341 cri.go:89] found id: ""
	I1216 03:04:26.850186  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:26.850240  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.855211  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:26.855284  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:26.893660  224341 cri.go:89] found id: ""
	I1216 03:04:26.893684  224341 logs.go:282] 0 containers: []
	W1216 03:04:26.893691  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:26.893697  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:26.893751  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:26.929607  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:26.929628  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:26.929632  224341 cri.go:89] found id: ""
	I1216 03:04:26.929639  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:26.929693  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.933638  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:26.937066  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:26.937125  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:26.972041  224341 cri.go:89] found id: ""
	I1216 03:04:26.972067  224341 logs.go:282] 0 containers: []
	W1216 03:04:26.972077  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:26.972085  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:26.972145  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:27.009502  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:27.009524  224341 cri.go:89] found id: ""
	I1216 03:04:27.009533  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:27.009589  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.013603  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:27.013658  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:27.052308  224341 cri.go:89] found id: ""
	I1216 03:04:27.052335  224341 logs.go:282] 0 containers: []
	W1216 03:04:27.052343  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:27.052348  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:27.052395  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:27.087498  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:27.087521  224341 cri.go:89] found id: ""
	I1216 03:04:27.087528  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:27.087584  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.091486  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:27.091506  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:27.135610  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:27.135637  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:27.171056  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:27.171085  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:27.208768  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:27.208798  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:27.247152  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:27.247180  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:27.293047  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:27.293076  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:27.369089  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:27.369119  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:27.403846  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:27.403883  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:27.457484  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:27.457516  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:27.578805  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:27.578852  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:27.596358  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:27.596385  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:27.666798  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:23.268377  233647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.071269575s)
	W1216 03:04:23.268417  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1216 03:04:23.268425  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:23.268436  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:23.302985  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:23.303010  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:23.336718  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:23.336756  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:23.352642  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:23.352674  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:23.386654  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:23.386686  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:23.422065  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:23.422098  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:23.489890  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:23.489919  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:26.024897  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:27.486938  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:46014->192.168.76.2:8443: read: connection reset by peer
	I1216 03:04:27.487013  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:27.487066  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:27.519814  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:27.519861  233647 cri.go:89] found id: "f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	I1216 03:04:27.519867  233647 cri.go:89] found id: ""
	I1216 03:04:27.519876  233647 logs.go:282] 2 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]
	I1216 03:04:27.519933  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.524050  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.528435  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:27.528497  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:27.558960  233647 cri.go:89] found id: ""
	I1216 03:04:27.558988  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.559005  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:27.559013  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:27.559067  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:27.588065  233647 cri.go:89] found id: ""
	I1216 03:04:27.588093  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.588104  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:27.588113  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:27.588170  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:27.616575  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:27.616599  233647 cri.go:89] found id: ""
	I1216 03:04:27.616610  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:27.616666  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.620915  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:27.620992  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:27.648038  233647 cri.go:89] found id: ""
	I1216 03:04:27.648066  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.648078  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:27.648086  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:27.648141  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:27.678473  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:27.678490  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:27.678499  233647 cri.go:89] found id: ""
	I1216 03:04:27.678506  233647 logs.go:282] 2 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:27.678561  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.682702  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:27.686697  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:27.686763  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:27.712886  233647 cri.go:89] found id: ""
	I1216 03:04:27.712909  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.712917  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:27.712922  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:27.712980  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:27.739275  233647 cri.go:89] found id: ""
	I1216 03:04:27.739376  233647 logs.go:282] 0 containers: []
	W1216 03:04:27.739416  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:27.739436  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:27.739499  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:27.806241  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:27.806270  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:27.837544  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:27.837575  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:27.857482  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:27.857520  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:27.914571  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:27.914592  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:27.914606  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:27.945517  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:27.945559  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:27.974105  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:27.974129  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:28.069246  233647 logs.go:123] Gathering logs for kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24] ...
	I1216 03:04:28.069282  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	W1216 03:04:28.096034  233647 logs.go:130] failed kube-apiserver [f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:04:28.093580    6013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist" containerID="f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	time="2025-12-16T03:04:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1216 03:04:28.093580    6013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist" containerID="f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24"
	time="2025-12-16T03:04:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24\": container with ID starting with f90290fa9e5bfc6099353bd63e0b3b2320c1f52959bf046b7e68294401d8ee24 not found: ID does not exist"
	
	** /stderr **
	I1216 03:04:28.096056  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:28.096070  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:28.121602  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:28.121628  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:25.361370  263091 addons.go:530] duration metric: took 736.392839ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:04:25.596297  263091 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-073001" context rescaled to 1 replicas
	W1216 03:04:27.354776  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	I1216 03:04:27.472487  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:27.972086  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:28.472460  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:28.971539  266278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:29.042955  266278 kubeadm.go:1114] duration metric: took 5.155022486s to wait for elevateKubeSystemPrivileges
	I1216 03:04:29.043001  266278 kubeadm.go:403] duration metric: took 13.255332897s to StartCluster
	I1216 03:04:29.043025  266278 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:29.043093  266278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:04:29.044782  266278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:29.045043  266278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:04:29.045072  266278 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:04:29.045131  266278 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:04:29.045280  266278 addons.go:70] Setting storage-provisioner=true in profile "no-preload-307185"
	I1216 03:04:29.045293  266278 addons.go:70] Setting default-storageclass=true in profile "no-preload-307185"
	I1216 03:04:29.045303  266278 addons.go:239] Setting addon storage-provisioner=true in "no-preload-307185"
	I1216 03:04:29.045320  266278 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-307185"
	I1216 03:04:29.045332  266278 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:04:29.045340  266278 host.go:66] Checking if "no-preload-307185" exists ...
	I1216 03:04:29.045781  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:04:29.046084  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:04:29.046810  266278 out.go:179] * Verifying Kubernetes components...
	I1216 03:04:29.050310  266278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:29.072939  266278 addons.go:239] Setting addon default-storageclass=true in "no-preload-307185"
	I1216 03:04:29.072986  266278 host.go:66] Checking if "no-preload-307185" exists ...
	I1216 03:04:29.073528  266278 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:04:29.076926  266278 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:29.077992  266278 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:29.078013  266278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:04:29.078085  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:29.100707  266278 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:29.100732  266278 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:04:29.100792  266278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:04:29.110463  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:29.127812  266278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:04:29.141913  266278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:04:29.206746  266278 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:29.228149  266278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:29.238770  266278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:29.318526  266278 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 03:04:29.319557  266278 node_ready.go:35] waiting up to 6m0s for node "no-preload-307185" to be "Ready" ...
	I1216 03:04:29.596065  266278 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:04:29.596998  266278 addons.go:530] duration metric: took 551.868633ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:04:29.824054  266278 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-307185" context rescaled to 1 replicas
	W1216 03:04:31.323018  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	I1216 03:04:30.167261  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:30.167686  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:30.167751  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:30.167832  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:30.217161  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:30.217185  224341 cri.go:89] found id: ""
	I1216 03:04:30.217202  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:30.217257  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.221972  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:30.222039  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:30.260182  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:30.260200  224341 cri.go:89] found id: ""
	I1216 03:04:30.260207  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:30.260256  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.264295  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:30.264365  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:30.302968  224341 cri.go:89] found id: ""
	I1216 03:04:30.302994  224341 logs.go:282] 0 containers: []
	W1216 03:04:30.303005  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:30.303012  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:30.303071  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:30.350482  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:30.350507  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:30.350514  224341 cri.go:89] found id: ""
	I1216 03:04:30.350524  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:30.350588  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.355582  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.360407  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:30.360479  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:30.408075  224341 cri.go:89] found id: ""
	I1216 03:04:30.408101  224341 logs.go:282] 0 containers: []
	W1216 03:04:30.408112  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:30.408119  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:30.408179  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:30.456434  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:30.456461  224341 cri.go:89] found id: ""
	I1216 03:04:30.456472  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:30.456531  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.461724  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:30.461796  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:30.507639  224341 cri.go:89] found id: ""
	I1216 03:04:30.507665  224341 logs.go:282] 0 containers: []
	W1216 03:04:30.507675  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:30.507682  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:30.507743  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:30.557877  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:30.557906  224341 cri.go:89] found id: ""
	I1216 03:04:30.557916  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:30.557979  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.563070  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:30.563094  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:30.584259  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:30.584370  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:30.660261  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:30.660285  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:30.660304  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:30.704598  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:30.704625  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:30.755793  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:30.755840  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:30.795341  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:30.795367  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:30.859451  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:30.859483  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:30.902740  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:30.902772  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:31.009266  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:31.009304  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:31.061285  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:31.061317  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:31.146467  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:31.146492  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:30.648909  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:30.649359  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:30.649425  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:30.649489  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:30.680129  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:30.680156  233647 cri.go:89] found id: ""
	I1216 03:04:30.680166  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:30.680277  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.684176  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:30.684242  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:30.714636  233647 cri.go:89] found id: ""
	I1216 03:04:30.714663  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.714674  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:30.714680  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:30.714724  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:30.744320  233647 cri.go:89] found id: ""
	I1216 03:04:30.744346  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.744357  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:30.744365  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:30.744411  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:30.775597  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:30.775618  233647 cri.go:89] found id: ""
	I1216 03:04:30.775628  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:30.775688  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.779894  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:30.779991  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:30.810482  233647 cri.go:89] found id: ""
	I1216 03:04:30.810505  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.810514  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:30.810520  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:30.810566  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:30.839730  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:30.839749  233647 cri.go:89] found id: "76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:30.839753  233647 cri.go:89] found id: ""
	I1216 03:04:30.839761  233647 logs.go:282] 2 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790]
	I1216 03:04:30.839833  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.843942  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:30.847643  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:30.847697  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:30.879330  233647 cri.go:89] found id: ""
	I1216 03:04:30.879359  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.879370  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:30.879378  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:30.879461  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:30.909702  233647 cri.go:89] found id: ""
	I1216 03:04:30.909727  233647 logs.go:282] 0 containers: []
	W1216 03:04:30.909737  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:30.909750  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:30.909760  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:30.993496  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:30.993532  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:31.052325  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:31.052342  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:31.052353  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:31.081434  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:31.081468  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:31.112091  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:31.112115  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:31.143865  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:31.143896  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:31.158929  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:31.158956  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:31.192495  233647 logs.go:123] Gathering logs for kube-controller-manager [76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790] ...
	I1216 03:04:31.192521  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76d0aef0563187dca3932d49cb458d6c73ccee2c62d84e1c40c4bea0e99e7790"
	I1216 03:04:31.219539  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:31.219563  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1216 03:04:29.356808  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	W1216 03:04:31.855954  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	W1216 03:04:33.822689  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	W1216 03:04:35.822902  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	I1216 03:04:33.683731  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:33.684182  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:33.684245  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:33.684313  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:33.719638  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:33.719660  224341 cri.go:89] found id: ""
	I1216 03:04:33.719668  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:33.719732  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.723564  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:33.723623  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:33.756396  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:33.756418  224341 cri.go:89] found id: ""
	I1216 03:04:33.756427  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:33.756485  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.760193  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:33.760241  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:33.794948  224341 cri.go:89] found id: ""
	I1216 03:04:33.794973  224341 logs.go:282] 0 containers: []
	W1216 03:04:33.794983  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:33.794990  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:33.795054  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:33.831869  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:33.831888  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:33.831894  224341 cri.go:89] found id: ""
	I1216 03:04:33.831903  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:33.831966  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.836217  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.840689  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:33.840754  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:33.882263  224341 cri.go:89] found id: ""
	I1216 03:04:33.882287  224341 logs.go:282] 0 containers: []
	W1216 03:04:33.882299  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:33.882306  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:33.882369  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:33.919801  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:33.919834  224341 cri.go:89] found id: ""
	I1216 03:04:33.919845  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:33.919912  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.923626  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:33.923676  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:33.960911  224341 cri.go:89] found id: ""
	I1216 03:04:33.960939  224341 logs.go:282] 0 containers: []
	W1216 03:04:33.960950  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:33.960958  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:33.961020  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:33.999211  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:33.999231  224341 cri.go:89] found id: ""
	I1216 03:04:33.999240  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:33.999335  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:34.003231  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:34.003252  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:34.063694  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:34.063732  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:34.168760  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:34.168798  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:34.187537  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:34.187567  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:34.261810  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:34.261842  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:34.261857  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:34.301375  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:34.301404  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:34.351962  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:34.351999  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:34.402382  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:34.402409  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:34.440734  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:34.440757  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:34.515640  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:34.515672  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:34.553729  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:34.553757  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:37.089427  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:37.089839  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:37.089910  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:37.089965  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:37.128978  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:37.129000  224341 cri.go:89] found id: ""
	I1216 03:04:37.129010  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:37.129064  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.133375  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:37.133446  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:37.174287  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:37.174313  224341 cri.go:89] found id: ""
	I1216 03:04:37.174323  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:37.174370  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.178662  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:37.178733  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:37.221546  224341 cri.go:89] found id: ""
	I1216 03:04:37.221567  224341 logs.go:282] 0 containers: []
	W1216 03:04:37.221574  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:37.221579  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:37.221624  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:37.256908  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:37.256931  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:37.256940  224341 cri.go:89] found id: ""
	I1216 03:04:37.256951  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:37.257012  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.260770  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.264215  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:37.264273  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:37.308112  224341 cri.go:89] found id: ""
	I1216 03:04:37.308146  224341 logs.go:282] 0 containers: []
	W1216 03:04:37.308158  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:37.308168  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:37.308291  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:37.355291  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:37.355314  224341 cri.go:89] found id: ""
	I1216 03:04:37.355324  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:37.355381  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.361033  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:37.361143  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:37.400370  224341 cri.go:89] found id: ""
	I1216 03:04:37.400393  224341 logs.go:282] 0 containers: []
	W1216 03:04:37.400402  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:37.400410  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:37.400469  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:37.436795  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:37.436828  224341 cri.go:89] found id: ""
	I1216 03:04:37.436839  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:37.436893  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.440984  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:37.441004  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:37.480346  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:37.480374  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:37.563172  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:37.563207  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:37.607766  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:37.607793  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:37.645038  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:37.645062  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:37.706961  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:37.706993  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:37.724013  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:37.724039  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:37.772027  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:37.772057  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:37.806697  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:37.806721  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:37.845698  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:37.845730  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:33.778977  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:33.779389  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:33.779443  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:33.779503  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:33.809015  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:33.809039  233647 cri.go:89] found id: ""
	I1216 03:04:33.809050  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:33.809108  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.813147  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:33.813220  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:33.843689  233647 cri.go:89] found id: ""
	I1216 03:04:33.843712  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.843720  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:33.843726  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:33.843766  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:33.874922  233647 cri.go:89] found id: ""
	I1216 03:04:33.874950  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.874962  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:33.874969  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:33.875030  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:33.904575  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:33.904598  233647 cri.go:89] found id: ""
	I1216 03:04:33.904606  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:33.904665  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.909588  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:33.909656  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:33.937449  233647 cri.go:89] found id: ""
	I1216 03:04:33.937474  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.937484  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:33.937491  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:33.937558  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:33.965216  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:33.965240  233647 cri.go:89] found id: ""
	I1216 03:04:33.965251  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:33.965313  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:33.969212  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:33.969265  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:33.997607  233647 cri.go:89] found id: ""
	I1216 03:04:33.997633  233647 logs.go:282] 0 containers: []
	W1216 03:04:33.997642  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:33.997648  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:33.997693  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:34.027141  233647 cri.go:89] found id: ""
	I1216 03:04:34.027168  233647 logs.go:282] 0 containers: []
	W1216 03:04:34.027178  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:34.027187  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:34.027203  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:34.054148  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:34.054178  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:34.083001  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:34.083029  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:34.144728  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:34.144779  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:34.179844  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:34.179879  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:34.287130  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:34.287162  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:34.304086  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:34.304118  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:34.363856  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:34.363905  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:34.363922  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:36.899991  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:36.900396  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:36.900450  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:36.900512  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:36.928846  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:36.928867  233647 cri.go:89] found id: ""
	I1216 03:04:36.928876  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:36.928933  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:36.932763  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:36.932812  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:36.961114  233647 cri.go:89] found id: ""
	I1216 03:04:36.961142  233647 logs.go:282] 0 containers: []
	W1216 03:04:36.961154  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:36.961161  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:36.961230  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:36.992750  233647 cri.go:89] found id: ""
	I1216 03:04:36.992771  233647 logs.go:282] 0 containers: []
	W1216 03:04:36.992780  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:36.992786  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:36.992854  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:37.020564  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:37.020586  233647 cri.go:89] found id: ""
	I1216 03:04:37.020594  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:37.020648  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.024746  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:37.024802  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:37.051149  233647 cri.go:89] found id: ""
	I1216 03:04:37.051170  233647 logs.go:282] 0 containers: []
	W1216 03:04:37.051178  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:37.051186  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:37.051230  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:37.077572  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:37.077591  233647 cri.go:89] found id: ""
	I1216 03:04:37.077598  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:37.077651  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:37.081489  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:37.081539  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:37.110429  233647 cri.go:89] found id: ""
	I1216 03:04:37.110459  233647 logs.go:282] 0 containers: []
	W1216 03:04:37.110473  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:37.110480  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:37.110533  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:37.140366  233647 cri.go:89] found id: ""
	I1216 03:04:37.140391  233647 logs.go:282] 0 containers: []
	W1216 03:04:37.140403  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:37.140414  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:37.140428  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:37.239331  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:37.239370  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:37.255575  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:37.255602  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:37.326926  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:37.326951  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:37.326968  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:37.370783  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:37.370808  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:37.398896  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:37.398925  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:37.425339  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:37.425363  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:37.489086  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:37.489120  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 03:04:34.355384  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	W1216 03:04:36.355669  263091 node_ready.go:57] node "old-k8s-version-073001" has "Ready":"False" status (will retry)
	I1216 03:04:37.356434  263091 node_ready.go:49] node "old-k8s-version-073001" is "Ready"
	I1216 03:04:37.356462  263091 node_ready.go:38] duration metric: took 12.004333871s for node "old-k8s-version-073001" to be "Ready" ...
	I1216 03:04:37.356480  263091 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:04:37.356528  263091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:04:37.370849  263091 api_server.go:72] duration metric: took 12.745793596s to wait for apiserver process to appear ...
	I1216 03:04:37.370869  263091 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:04:37.370897  263091 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 03:04:37.376057  263091 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 03:04:37.377242  263091 api_server.go:141] control plane version: v1.28.0
	I1216 03:04:37.377269  263091 api_server.go:131] duration metric: took 6.391967ms to wait for apiserver health ...
	I1216 03:04:37.377278  263091 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:04:37.381006  263091 system_pods.go:59] 8 kube-system pods found
	I1216 03:04:37.381043  263091 system_pods.go:61] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:37.381052  263091 system_pods.go:61] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:37.381060  263091 system_pods.go:61] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:37.381066  263091 system_pods.go:61] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:37.381071  263091 system_pods.go:61] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:37.381080  263091 system_pods.go:61] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:37.381086  263091 system_pods.go:61] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:37.381093  263091 system_pods.go:61] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:37.381108  263091 system_pods.go:74] duration metric: took 3.822929ms to wait for pod list to return data ...
	I1216 03:04:37.381122  263091 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:04:37.383128  263091 default_sa.go:45] found service account: "default"
	I1216 03:04:37.383147  263091 default_sa.go:55] duration metric: took 2.018975ms for default service account to be created ...
	I1216 03:04:37.383157  263091 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:04:37.387599  263091 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:37.387632  263091 system_pods.go:89] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:37.387640  263091 system_pods.go:89] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:37.387648  263091 system_pods.go:89] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:37.387653  263091 system_pods.go:89] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:37.387659  263091 system_pods.go:89] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:37.387665  263091 system_pods.go:89] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:37.387671  263091 system_pods.go:89] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:37.387682  263091 system_pods.go:89] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:37.387717  263091 retry.go:31] will retry after 235.616732ms: missing components: kube-dns
	I1216 03:04:37.628167  263091 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:37.628204  263091 system_pods.go:89] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:37.628212  263091 system_pods.go:89] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:37.628220  263091 system_pods.go:89] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:37.628226  263091 system_pods.go:89] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:37.628232  263091 system_pods.go:89] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:37.628237  263091 system_pods.go:89] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:37.628242  263091 system_pods.go:89] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:37.628251  263091 system_pods.go:89] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:37.628270  263091 retry.go:31] will retry after 382.482522ms: missing components: kube-dns
	I1216 03:04:38.015537  263091 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:38.015569  263091 system_pods.go:89] "coredns-5dd5756b68-8lk58" [d193df22-756a-429b-b218-48251e837115] Running
	I1216 03:04:38.015579  263091 system_pods.go:89] "etcd-old-k8s-version-073001" [8155fe61-f481-409f-b2be-7fbb3a8016ac] Running
	I1216 03:04:38.015585  263091 system_pods.go:89] "kindnet-8qgxg" [ea2fe4c6-f92f-4ebe-a5ab-8b88a452ba08] Running
	I1216 03:04:38.015590  263091 system_pods.go:89] "kube-apiserver-old-k8s-version-073001" [0c0e4ddc-e502-47a1-aa79-7eb045dcbb9a] Running
	I1216 03:04:38.015596  263091 system_pods.go:89] "kube-controller-manager-old-k8s-version-073001" [26d71a70-8e41-4b5b-892f-88a6fd3ad8e6] Running
	I1216 03:04:38.015601  263091 system_pods.go:89] "kube-proxy-mhxd9" [427da05c-6160-4d42-ae08-2c49bb47dcb1] Running
	I1216 03:04:38.015606  263091 system_pods.go:89] "kube-scheduler-old-k8s-version-073001" [8963b4f8-221f-49c3-a8d4-db1ad71e572d] Running
	I1216 03:04:38.015611  263091 system_pods.go:89] "storage-provisioner" [9bbfe39d-4b96-4d7b-a8d8-3f016c9ca786] Running
	I1216 03:04:38.015620  263091 system_pods.go:126] duration metric: took 632.456255ms to wait for k8s-apps to be running ...
	I1216 03:04:38.015633  263091 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:04:38.015681  263091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:38.029077  263091 system_svc.go:56] duration metric: took 13.436289ms WaitForService to wait for kubelet
	I1216 03:04:38.029102  263091 kubeadm.go:587] duration metric: took 13.404051181s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:04:38.029124  263091 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:04:38.031756  263091 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:04:38.031785  263091 node_conditions.go:123] node cpu capacity is 8
	I1216 03:04:38.031805  263091 node_conditions.go:105] duration metric: took 2.675128ms to run NodePressure ...
	I1216 03:04:38.031832  263091 start.go:242] waiting for startup goroutines ...
	I1216 03:04:38.031841  263091 start.go:247] waiting for cluster config update ...
	I1216 03:04:38.031857  263091 start.go:256] writing updated cluster config ...
	I1216 03:04:38.032283  263091 ssh_runner.go:195] Run: rm -f paused
	I1216 03:04:38.035943  263091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:38.040092  263091 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-8lk58" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.044415  263091 pod_ready.go:94] pod "coredns-5dd5756b68-8lk58" is "Ready"
	I1216 03:04:38.044438  263091 pod_ready.go:86] duration metric: took 4.325397ms for pod "coredns-5dd5756b68-8lk58" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.047013  263091 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.050901  263091 pod_ready.go:94] pod "etcd-old-k8s-version-073001" is "Ready"
	I1216 03:04:38.050918  263091 pod_ready.go:86] duration metric: took 3.888416ms for pod "etcd-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.053525  263091 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.057315  263091 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-073001" is "Ready"
	I1216 03:04:38.057336  263091 pod_ready.go:86] duration metric: took 3.793165ms for pod "kube-apiserver-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.059556  263091 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.440942  263091 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-073001" is "Ready"
	I1216 03:04:38.440971  263091 pod_ready.go:86] duration metric: took 381.398224ms for pod "kube-controller-manager-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:38.640793  263091 pod_ready.go:83] waiting for pod "kube-proxy-mhxd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.040807  263091 pod_ready.go:94] pod "kube-proxy-mhxd9" is "Ready"
	I1216 03:04:39.040870  263091 pod_ready.go:86] duration metric: took 400.044513ms for pod "kube-proxy-mhxd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.241603  263091 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.640048  263091 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-073001" is "Ready"
	I1216 03:04:39.640074  263091 pod_ready.go:86] duration metric: took 398.449646ms for pod "kube-scheduler-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:39.640084  263091 pod_ready.go:40] duration metric: took 1.604105384s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:39.685502  263091 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1216 03:04:39.687002  263091 out.go:203] 
	W1216 03:04:39.688223  263091 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1216 03:04:39.689409  263091 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1216 03:04:39.690775  263091 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-073001" cluster and "default" namespace by default
	W1216 03:04:38.322668  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	W1216 03:04:40.822769  266278 node_ready.go:57] node "no-preload-307185" has "Ready":"False" status (will retry)
	I1216 03:04:41.823191  266278 node_ready.go:49] node "no-preload-307185" is "Ready"
	I1216 03:04:41.823216  266278 node_ready.go:38] duration metric: took 12.503636541s for node "no-preload-307185" to be "Ready" ...
	I1216 03:04:41.823229  266278 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:04:41.823284  266278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:04:41.835480  266278 api_server.go:72] duration metric: took 12.790371447s to wait for apiserver process to appear ...
	I1216 03:04:41.835503  266278 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:04:41.835523  266278 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 03:04:41.839474  266278 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 03:04:41.840373  266278 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 03:04:41.840394  266278 api_server.go:131] duration metric: took 4.885221ms to wait for apiserver health ...
	I1216 03:04:41.840401  266278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:04:41.843381  266278 system_pods.go:59] 8 kube-system pods found
	I1216 03:04:41.843409  266278 system_pods.go:61] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:41.843414  266278 system_pods.go:61] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:41.843420  266278 system_pods.go:61] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:41.843424  266278 system_pods.go:61] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:41.843430  266278 system_pods.go:61] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:41.843433  266278 system_pods.go:61] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:41.843436  266278 system_pods.go:61] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:41.843441  266278 system_pods.go:61] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:41.843448  266278 system_pods.go:74] duration metric: took 3.043015ms to wait for pod list to return data ...
	I1216 03:04:41.843457  266278 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:04:41.845934  266278 default_sa.go:45] found service account: "default"
	I1216 03:04:41.845952  266278 default_sa.go:55] duration metric: took 2.489686ms for default service account to be created ...
	I1216 03:04:41.845960  266278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:04:41.848539  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:41.848569  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:41.848578  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:41.848586  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:41.848592  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:41.848601  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:41.848608  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:41.848617  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:41.848626  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:41.848650  266278 retry.go:31] will retry after 282.258049ms: missing components: kube-dns
	I1216 03:04:37.947984  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:37.948015  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:38.009727  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:40.510140  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:40.510568  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:40.510633  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:40.510696  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:40.547520  224341 cri.go:89] found id: "7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:40.547546  224341 cri.go:89] found id: ""
	I1216 03:04:40.547555  224341 logs.go:282] 1 containers: [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646]
	I1216 03:04:40.547609  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.551567  224341 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:40.551622  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:40.593028  224341 cri.go:89] found id: "736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:40.593052  224341 cri.go:89] found id: ""
	I1216 03:04:40.593063  224341 logs.go:282] 1 containers: [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363]
	I1216 03:04:40.593124  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.597202  224341 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:40.597253  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:40.633469  224341 cri.go:89] found id: ""
	I1216 03:04:40.633498  224341 logs.go:282] 0 containers: []
	W1216 03:04:40.633509  224341 logs.go:284] No container was found matching "coredns"
	I1216 03:04:40.633518  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:40.633577  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:40.669134  224341 cri.go:89] found id: "02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:40.669169  224341 cri.go:89] found id: "02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:40.669173  224341 cri.go:89] found id: ""
	I1216 03:04:40.669180  224341 logs.go:282] 2 containers: [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9]
	I1216 03:04:40.669233  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.673156  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.676673  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:40.676724  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:40.711538  224341 cri.go:89] found id: ""
	I1216 03:04:40.711564  224341 logs.go:282] 0 containers: []
	W1216 03:04:40.711572  224341 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:40.711578  224341 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:40.711627  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:40.747041  224341 cri.go:89] found id: "cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:40.747061  224341 cri.go:89] found id: ""
	I1216 03:04:40.747068  224341 logs.go:282] 1 containers: [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c]
	I1216 03:04:40.747132  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.751110  224341 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:40.751180  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:40.786123  224341 cri.go:89] found id: ""
	I1216 03:04:40.786156  224341 logs.go:282] 0 containers: []
	W1216 03:04:40.786167  224341 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:40.786175  224341 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:40.786228  224341 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:40.821410  224341 cri.go:89] found id: "fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:40.821434  224341 cri.go:89] found id: ""
	I1216 03:04:40.821445  224341 logs.go:282] 1 containers: [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0]
	I1216 03:04:40.821502  224341 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.825419  224341 logs.go:123] Gathering logs for kube-controller-manager [cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c] ...
	I1216 03:04:40.825442  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd3a986f28f39afe2d6ba1e7ac84e80ea315cdc14478e8a721012e4621c54c9c"
	I1216 03:04:40.861223  224341 logs.go:123] Gathering logs for storage-provisioner [fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0] ...
	I1216 03:04:40.861254  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fed7d4233eb83c7f4c1fb4bdf230aaafa72cdba6da1c53d5f09a34ebfde901e0"
	I1216 03:04:40.896085  224341 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:40.896113  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:40.952317  224341 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:40.952347  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:41.054168  224341 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:41.054199  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:41.115942  224341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:41.115963  224341 logs.go:123] Gathering logs for kube-apiserver [7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646] ...
	I1216 03:04:41.115978  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c13763cb5591fb23ad920ee4cb197f74e20e6d39b152cc8b373f77406fea646"
	I1216 03:04:41.156998  224341 logs.go:123] Gathering logs for container status ...
	I1216 03:04:41.157028  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:41.196355  224341 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:41.196386  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:41.213363  224341 logs.go:123] Gathering logs for etcd [736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363] ...
	I1216 03:04:41.213394  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 736a1ec7bda023a19951c2c43d767c96d1457accee15f846ff173a11531c7363"
	I1216 03:04:41.261615  224341 logs.go:123] Gathering logs for kube-scheduler [02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba] ...
	I1216 03:04:41.261652  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a5a58e1d176fd58699fcf7f7ac6350bbdb5360d9bcdf1aeff0f2c37dc265ba"
	I1216 03:04:41.348753  224341 logs.go:123] Gathering logs for kube-scheduler [02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9] ...
	I1216 03:04:41.348782  224341 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02407be8b8573d30ff37a2d77c5e432fe5cee3d37d3e464e3743345a77be69c9"
	I1216 03:04:40.025584  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:40.026067  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:40.026114  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:40.026164  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:40.054288  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:40.054307  233647 cri.go:89] found id: ""
	I1216 03:04:40.054316  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:40.054366  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.058203  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:40.058257  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:40.083760  233647 cri.go:89] found id: ""
	I1216 03:04:40.083784  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.083795  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:40.083803  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:40.083898  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:40.110529  233647 cri.go:89] found id: ""
	I1216 03:04:40.110556  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.110574  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:40.110583  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:40.110647  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:40.136367  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:40.136395  233647 cri.go:89] found id: ""
	I1216 03:04:40.136406  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:40.136463  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.140559  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:40.140621  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:40.169045  233647 cri.go:89] found id: ""
	I1216 03:04:40.169075  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.169091  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:40.169099  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:40.169160  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:40.200419  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:40.200443  233647 cri.go:89] found id: ""
	I1216 03:04:40.200452  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:40.200506  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:40.205230  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:40.205288  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:40.231274  233647 cri.go:89] found id: ""
	I1216 03:04:40.231295  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.231304  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:40.231311  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:40.231367  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:40.260341  233647 cri.go:89] found id: ""
	I1216 03:04:40.260361  233647 logs.go:282] 0 containers: []
	W1216 03:04:40.260369  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:40.260377  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:40.260391  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:40.286085  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:40.286111  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:40.312754  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:40.312782  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:40.371864  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:40.371894  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:40.402082  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:40.402112  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:40.485537  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:40.485568  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:40.500011  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:40.500039  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:40.561081  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:40.561108  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:40.561123  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:43.095990  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:43.096401  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:43.096454  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:43.096500  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:43.123846  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:43.123866  233647 cri.go:89] found id: ""
	I1216 03:04:43.123873  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:43.123935  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:43.127889  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:43.127956  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:42.135077  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:42.135109  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:42.135114  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:42.135120  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:42.135124  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:42.135128  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:42.135133  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:42.135136  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:42.135140  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:42.135155  266278 retry.go:31] will retry after 313.389916ms: missing components: kube-dns
	I1216 03:04:42.452939  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:42.452972  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:42.452980  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:42.452988  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:42.452992  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:42.452998  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:42.453003  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:42.453009  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:42.453016  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:42.453036  266278 retry.go:31] will retry after 359.676321ms: missing components: kube-dns
	I1216 03:04:42.816812  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:42.816871  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:04:42.816877  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:42.816883  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:42.816886  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:42.816891  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:42.816894  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:42.816904  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:42.816909  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:04:42.816923  266278 retry.go:31] will retry after 371.549417ms: missing components: kube-dns
	I1216 03:04:43.192959  266278 system_pods.go:86] 8 kube-system pods found
	I1216 03:04:43.192993  266278 system_pods.go:89] "coredns-7d764666f9-nm9bc" [03616ce2-a5c9-473c-b968-8525597cf605] Running
	I1216 03:04:43.192999  266278 system_pods.go:89] "etcd-no-preload-307185" [e422d599-b1a9-4789-9a05-12bdfa726460] Running
	I1216 03:04:43.193003  266278 system_pods.go:89] "kindnet-7zn78" [e5d25c85-cfe3-4ece-aaef-25d832bee145] Running
	I1216 03:04:43.193007  266278 system_pods.go:89] "kube-apiserver-no-preload-307185" [6fc518a0-61de-479c-b521-59763450f0c2] Running
	I1216 03:04:43.193011  266278 system_pods.go:89] "kube-controller-manager-no-preload-307185" [94087293-313f-446b-887b-05f4a1007579] Running
	I1216 03:04:43.193014  266278 system_pods.go:89] "kube-proxy-tp2h2" [029e1cb4-d416-43bc-bd83-2309879667f3] Running
	I1216 03:04:43.193017  266278 system_pods.go:89] "kube-scheduler-no-preload-307185" [943ec2bd-6a44-4b32-9a27-f2452d6d4dab] Running
	I1216 03:04:43.193020  266278 system_pods.go:89] "storage-provisioner" [40130844-03c7-401f-82b6-0676c175fa4b] Running
	I1216 03:04:43.193029  266278 system_pods.go:126] duration metric: took 1.34706308s to wait for k8s-apps to be running ...
	I1216 03:04:43.193038  266278 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:04:43.193086  266278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:43.208031  266278 system_svc.go:56] duration metric: took 14.983786ms WaitForService to wait for kubelet
	I1216 03:04:43.208063  266278 kubeadm.go:587] duration metric: took 14.16295763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:04:43.208088  266278 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:04:43.211118  266278 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:04:43.211150  266278 node_conditions.go:123] node cpu capacity is 8
	I1216 03:04:43.211170  266278 node_conditions.go:105] duration metric: took 3.075802ms to run NodePressure ...
	I1216 03:04:43.211183  266278 start.go:242] waiting for startup goroutines ...
	I1216 03:04:43.211196  266278 start.go:247] waiting for cluster config update ...
	I1216 03:04:43.211222  266278 start.go:256] writing updated cluster config ...
	I1216 03:04:43.211503  266278 ssh_runner.go:195] Run: rm -f paused
	I1216 03:04:43.215638  266278 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:43.219346  266278 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nm9bc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.223657  266278 pod_ready.go:94] pod "coredns-7d764666f9-nm9bc" is "Ready"
	I1216 03:04:43.223677  266278 pod_ready.go:86] duration metric: took 4.310872ms for pod "coredns-7d764666f9-nm9bc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.225765  266278 pod_ready.go:83] waiting for pod "etcd-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.229783  266278 pod_ready.go:94] pod "etcd-no-preload-307185" is "Ready"
	I1216 03:04:43.229810  266278 pod_ready.go:86] duration metric: took 4.024063ms for pod "etcd-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.293783  266278 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.298594  266278 pod_ready.go:94] pod "kube-apiserver-no-preload-307185" is "Ready"
	I1216 03:04:43.298621  266278 pod_ready.go:86] duration metric: took 4.808ms for pod "kube-apiserver-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.300770  266278 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.619624  266278 pod_ready.go:94] pod "kube-controller-manager-no-preload-307185" is "Ready"
	I1216 03:04:43.619645  266278 pod_ready.go:86] duration metric: took 318.853802ms for pod "kube-controller-manager-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:43.819653  266278 pod_ready.go:83] waiting for pod "kube-proxy-tp2h2" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.220236  266278 pod_ready.go:94] pod "kube-proxy-tp2h2" is "Ready"
	I1216 03:04:44.220266  266278 pod_ready.go:86] duration metric: took 400.587068ms for pod "kube-proxy-tp2h2" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.420794  266278 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.820532  266278 pod_ready.go:94] pod "kube-scheduler-no-preload-307185" is "Ready"
	I1216 03:04:44.820559  266278 pod_ready.go:86] duration metric: took 399.731672ms for pod "kube-scheduler-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:04:44.820570  266278 pod_ready.go:40] duration metric: took 1.604895974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:04:44.871646  266278 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 03:04:44.876299  266278 out.go:179] * Done! kubectl is now configured to use "no-preload-307185" cluster and "default" namespace by default
	I1216 03:04:43.904075  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:43.904484  224341 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 03:04:43.904539  224341 kubeadm.go:602] duration metric: took 4m14.640353632s to restartPrimaryControlPlane
	W1216 03:04:43.904587  224341 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:04:43.904641  224341 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 03:04:44.619626  224341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:44.631601  224341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:04:44.641269  224341 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:44.641332  224341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:44.650573  224341 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:44.650596  224341 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:44.650645  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:04:44.660301  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:44.660364  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:44.669651  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:04:44.678800  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:44.678879  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:44.688010  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:04:44.697239  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:44.697310  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:44.705917  224341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:04:44.715673  224341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:44.715726  224341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:44.724368  224341 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:44.779778  224341 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:04:44.839245  224341 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:43.154733  233647 cri.go:89] found id: ""
	I1216 03:04:43.154752  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.154759  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:43.154764  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:43.154807  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:43.182338  233647 cri.go:89] found id: ""
	I1216 03:04:43.182362  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.182372  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:43.182379  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:43.182436  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:43.211125  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:43.211146  233647 cri.go:89] found id: ""
	I1216 03:04:43.211166  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:43.211219  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:43.215454  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:43.215518  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:43.245421  233647 cri.go:89] found id: ""
	I1216 03:04:43.245445  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.245454  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:43.245460  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:43.245508  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:43.271711  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:43.271730  233647 cri.go:89] found id: ""
	I1216 03:04:43.271736  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:43.271785  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:43.275656  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:43.275720  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:43.304227  233647 cri.go:89] found id: ""
	I1216 03:04:43.304248  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.304257  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:43.304262  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:43.304327  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:43.331002  233647 cri.go:89] found id: ""
	I1216 03:04:43.331029  233647 logs.go:282] 0 containers: []
	W1216 03:04:43.331041  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:43.331052  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:43.331073  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:43.345955  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:43.345984  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:43.402576  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:43.402598  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:43.402612  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:43.432899  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:43.432926  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:43.460428  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:43.460457  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:43.486603  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:43.486625  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:43.543843  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:43.543874  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:43.573462  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:43.573493  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:46.156154  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:46.156579  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:46.156642  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:46.156706  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:46.183671  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:46.183702  233647 cri.go:89] found id: ""
	I1216 03:04:46.183713  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:46.183772  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:46.188151  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:46.188208  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:46.217415  233647 cri.go:89] found id: ""
	I1216 03:04:46.217437  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.217448  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:46.217454  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:46.217511  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:46.244563  233647 cri.go:89] found id: ""
	I1216 03:04:46.244589  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.244596  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:46.244602  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:46.244656  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:46.271475  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:46.271498  233647 cri.go:89] found id: ""
	I1216 03:04:46.271508  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:46.271560  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:46.275440  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:46.275502  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:46.303741  233647 cri.go:89] found id: ""
	I1216 03:04:46.303763  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.303772  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:46.303779  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:46.303858  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:46.332440  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:46.332459  233647 cri.go:89] found id: ""
	I1216 03:04:46.332468  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:46.332524  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:46.336438  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:46.336493  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:46.364557  233647 cri.go:89] found id: ""
	I1216 03:04:46.364585  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.364597  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:46.364605  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:46.364661  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:46.391606  233647 cri.go:89] found id: ""
	I1216 03:04:46.391634  233647 logs.go:282] 0 containers: []
	W1216 03:04:46.391643  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:46.391652  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:46.391662  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:46.448671  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:46.448702  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:46.448719  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:46.479787  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:46.479831  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:46.507184  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:46.507209  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:46.537405  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:46.537432  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:46.608985  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:46.609016  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:46.644455  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:46.644484  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:46.745339  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:46.745369  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
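
The cycle above is a simple probe-then-collect loop: hit the apiserver's /healthz endpoint, and when the connection is refused, fall back to gathering component logs over SSH before trying again. A minimal sketch of that polling pattern follows; it is illustrative only, not minikube's actual api_server.go, and the URL and ~3-second retry interval are simply read off the timestamps above.

// Hedged sketch: poll an apiserver /healthz endpoint until it answers 200,
// treating "connection refused" the way the api_server.go lines above do.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe talks to a bare IP, so certificate verification is skipped,
		// as a liveness probe of this kind typically would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "returned 200: ok" in the log
			}
		}
		// On error the real harness gathers kubelet/CRI-O/container logs here,
		// then retries; this sketch just waits.
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
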
	I1216 03:04:51.882789  224341 kubeadm.go:319] [init] Using Kubernetes version: v1.32.0
	I1216 03:04:51.882888  224341 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:51.883020  224341 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:51.883068  224341 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:04:51.883134  224341 kubeadm.go:319] OS: Linux
	I1216 03:04:51.883188  224341 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:51.883243  224341 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:51.883286  224341 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:51.883332  224341 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:51.883373  224341 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:51.883413  224341 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:51.883460  224341 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:51.883498  224341 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:04:51.883599  224341 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:51.883724  224341 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:51.883808  224341 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:04:51.883905  224341 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:04:51.885597  224341 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:51.885663  224341 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:51.885720  224341 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:51.885793  224341 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:04:51.885883  224341 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:04:51.885969  224341 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:04:51.886047  224341 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:04:51.886113  224341 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:04:51.886180  224341 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:04:51.886297  224341 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:04:51.886395  224341 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:04:51.886458  224341 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:04:51.886529  224341 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:51.886603  224341 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:51.886689  224341 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:04:51.886782  224341 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:51.886891  224341 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:51.886984  224341 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:51.887104  224341 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:51.887200  224341 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:51.888377  224341 out.go:252]   - Booting up control plane ...
	I1216 03:04:51.888475  224341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:51.888579  224341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:51.888673  224341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:51.888806  224341 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:51.888951  224341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:51.889016  224341 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:51.889193  224341 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:04:51.889338  224341 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:04:51.889387  224341 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001348297s
	I1216 03:04:51.889451  224341 kubeadm.go:319] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 03:04:51.889496  224341 kubeadm.go:319] [api-check] The API server is healthy after 3.502046107s
	I1216 03:04:51.889590  224341 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:04:51.889694  224341 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:04:51.889758  224341 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:04:51.890004  224341 kubeadm.go:319] [mark-control-plane] Marking the node running-upgrade-146373 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:04:51.890096  224341 kubeadm.go:319] [bootstrap-token] Using token: jywhmz.1yu4qm1hntm5a3yj
	I1216 03:04:51.891460  224341 out.go:252]   - Configuring RBAC rules ...
	I1216 03:04:51.891564  224341 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:04:51.891681  224341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:04:51.891814  224341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:04:51.891985  224341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:04:51.892155  224341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:04:51.892279  224341 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:04:51.892435  224341 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:04:51.892476  224341 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:04:51.892518  224341 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:04:51.892524  224341 kubeadm.go:319] 
	I1216 03:04:51.892576  224341 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:04:51.892584  224341 kubeadm.go:319] 
	I1216 03:04:51.892643  224341 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:04:51.892650  224341 kubeadm.go:319] 
	I1216 03:04:51.892672  224341 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:04:51.892765  224341 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:04:51.892880  224341 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:04:51.892891  224341 kubeadm.go:319] 
	I1216 03:04:51.892972  224341 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:04:51.892984  224341 kubeadm.go:319] 
	I1216 03:04:51.893053  224341 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:04:51.893062  224341 kubeadm.go:319] 
	I1216 03:04:51.893121  224341 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:04:51.893189  224341 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:04:51.893246  224341 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:04:51.893252  224341 kubeadm.go:319] 
	I1216 03:04:51.893321  224341 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:04:51.893386  224341 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:04:51.893393  224341 kubeadm.go:319] 
	I1216 03:04:51.893478  224341 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jywhmz.1yu4qm1hntm5a3yj \
	I1216 03:04:51.893607  224341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:04:51.893639  224341 kubeadm.go:319] 	--control-plane 
	I1216 03:04:51.893648  224341 kubeadm.go:319] 
	I1216 03:04:51.893776  224341 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:04:51.893792  224341 kubeadm.go:319] 
	I1216 03:04:51.893921  224341 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jywhmz.1yu4qm1hntm5a3yj \
	I1216 03:04:51.894029  224341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
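
The --discovery-token-ca-cert-hash printed by kubeadm above is not arbitrary: it is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, prefixed with "sha256:". The sketch below recomputes it from the certificate directory kubeadm reports ("/var/lib/minikube/certs"); the ca.crt file name is an assumption for illustration.

// Hedged sketch: recompute a kubeadm discovery-token CA certificate hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(h) // compare with the value printed after --discovery-token-ca-cert-hash
}
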
	I1216 03:04:51.894041  224341 cni.go:84] Creating CNI manager for ""
	I1216 03:04:51.894050  224341 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:04:51.895367  224341 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 03:04:51.896434  224341 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:04:51.900595  224341 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I1216 03:04:51.900609  224341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:04:51.918947  224341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:04:52.140534  224341 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:04:52.140728  224341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:04:52.140769  224341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-146373 minikube.k8s.io/updated_at=2025_12_16T03_04_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=running-upgrade-146373 minikube.k8s.io/primary=true
	I1216 03:04:52.253711  224341 kubeadm.go:1114] duration metric: took 113.035858ms to wait for elevateKubeSystemPrivileges
	I1216 03:04:52.253762  224341 ops.go:34] apiserver oom_adj: -16
	I1216 03:04:52.254047  224341 kubeadm.go:403] duration metric: took 4m23.059557116s to StartCluster
	I1216 03:04:52.254084  224341 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:52.254164  224341 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:04:52.255977  224341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:04:52.256257  224341 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:04:52.256446  224341 config.go:182] Loaded profile config "running-upgrade-146373": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 03:04:52.256506  224341 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:04:52.256584  224341 addons.go:70] Setting storage-provisioner=true in profile "running-upgrade-146373"
	I1216 03:04:52.256606  224341 addons.go:239] Setting addon storage-provisioner=true in "running-upgrade-146373"
	W1216 03:04:52.256616  224341 addons.go:248] addon storage-provisioner should already be in state true
	I1216 03:04:52.256643  224341 host.go:66] Checking if "running-upgrade-146373" exists ...
	I1216 03:04:52.256957  224341 addons.go:70] Setting default-storageclass=true in profile "running-upgrade-146373"
	I1216 03:04:52.257021  224341 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-146373"
	I1216 03:04:52.257335  224341 cli_runner.go:164] Run: docker container inspect running-upgrade-146373 --format={{.State.Status}}
	I1216 03:04:52.258348  224341 cli_runner.go:164] Run: docker container inspect running-upgrade-146373 --format={{.State.Status}}
	I1216 03:04:52.258374  224341 out.go:179] * Verifying Kubernetes components...
	I1216 03:04:52.259967  224341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:04:52.284773  224341 kapi.go:59] client config for running-upgrade-146373: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I1216 03:04:52.285506  224341 addons.go:239] Setting addon default-storageclass=true in "running-upgrade-146373"
	W1216 03:04:52.285531  224341 addons.go:248] addon default-storageclass should already be in state true
	I1216 03:04:52.285581  224341 host.go:66] Checking if "running-upgrade-146373" exists ...
	I1216 03:04:52.286103  224341 cli_runner.go:164] Run: docker container inspect running-upgrade-146373 --format={{.State.Status}}
	I1216 03:04:52.286860  224341 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:04:52.288241  224341 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:52.288258  224341 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:04:52.288312  224341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-146373
	I1216 03:04:52.317017  224341 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:52.317038  224341 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:04:52.317093  224341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-146373
	I1216 03:04:52.317220  224341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/running-upgrade-146373/id_rsa Username:docker}
	I1216 03:04:52.339164  224341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/running-upgrade-146373/id_rsa Username:docker}
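
The two docker container inspect calls above recover the host port that Docker mapped to the guest's 22/tcp, which is then used to open the SSH clients against 127.0.0.1. A hedged sketch of that lookup follows; the container name is the profile from this run, and this is not minikube's actual sshutil/cli_runner code.

// Hedged sketch: ask Docker for the host port mapped to the node's SSH port.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// Same Go-template format string that appears in the log above.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("running-upgrade-146373")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// The harness then dials 127.0.0.1:<port> with the profile's id_rsa key.
	fmt.Println("ssh port:", port)
}
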
	I1216 03:04:52.386211  224341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:04:52.399491  224341 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:04:52.399549  224341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:04:52.412061  224341 api_server.go:72] duration metric: took 155.766523ms to wait for apiserver process to appear ...
	I1216 03:04:52.412086  224341 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:04:52.412109  224341 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 03:04:52.417387  224341 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 03:04:52.425893  224341 api_server.go:141] control plane version: v1.32.0
	I1216 03:04:52.425927  224341 api_server.go:131] duration metric: took 13.833031ms to wait for apiserver health ...
	I1216 03:04:52.425939  224341 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:04:52.429949  224341 system_pods.go:59] 4 kube-system pods found
	I1216 03:04:52.429994  224341 system_pods.go:61] "etcd-running-upgrade-146373" [fa0fa4f0-4425-4042-bebe-9b9cf95f58fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:04:52.430006  224341 system_pods.go:61] "kube-apiserver-running-upgrade-146373" [e034c587-52f2-4884-9044-9150b2b8bdd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:04:52.430017  224341 system_pods.go:61] "kube-controller-manager-running-upgrade-146373" [52275cde-fd30-4d2b-89bd-b85c9439e480] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:04:52.430026  224341 system_pods.go:61] "kube-scheduler-running-upgrade-146373" [532d3cd6-26b2-4fba-b41c-c9f75e1a012f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:04:52.430034  224341 system_pods.go:74] duration metric: took 4.086371ms to wait for pod list to return data ...
	I1216 03:04:52.430048  224341 kubeadm.go:587] duration metric: took 173.755986ms to wait for: map[apiserver:true system_pods:true]
	I1216 03:04:52.430062  224341 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:04:52.431992  224341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:04:52.433465  224341 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:04:52.433526  224341 node_conditions.go:123] node cpu capacity is 8
	I1216 03:04:52.433546  224341 node_conditions.go:105] duration metric: took 3.478197ms to run NodePressure ...
	I1216 03:04:52.433560  224341 start.go:242] waiting for startup goroutines ...
	I1216 03:04:52.450348  224341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:04:52.755589  224341 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:04:52.756887  224341 addons.go:530] duration metric: took 500.382348ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:04:52.756923  224341 start.go:247] waiting for cluster config update ...
	I1216 03:04:52.756933  224341 start.go:256] writing updated cluster config ...
	I1216 03:04:52.757143  224341 ssh_runner.go:195] Run: rm -f paused
	I1216 03:04:52.808106  224341 start.go:625] kubectl: 1.34.3, cluster: 1.32.0 (minor skew: 2)
	I1216 03:04:52.810125  224341 out.go:203] 
	W1216 03:04:52.811461  224341 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.32.0.
	I1216 03:04:52.812581  224341 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1216 03:04:52.814059  224341 out.go:179] * Done! kubectl is now configured to use "running-upgrade-146373" cluster and "default" namespace by default
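
The closing warning flags a kubectl/cluster minor-version skew of 2 (kubectl 1.34.3 against Kubernetes 1.32.0), since kubectl only guarantees compatibility within one minor version of the server. A rough sketch of that comparison, using the versions from the log; this is illustrative and not the actual start.go logic.

// Hedged sketch: warn when the kubectl/cluster minor-version skew exceeds 1.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minorOf(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unparseable version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	kubectl, cluster := "1.34.3", "1.32.0"
	km, _ := minorOf(kubectl)
	cm, _ := minorOf(cluster)
	skew := km - cm
	if skew < 0 {
		skew = -skew
	}
	if skew > 1 {
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s (minor skew: %d)\n",
			kubectl, cluster, skew)
	}
}
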
	I1216 03:04:49.260957  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:49.261377  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:49.261437  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:49.261485  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:49.291541  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:49.291566  233647 cri.go:89] found id: ""
	I1216 03:04:49.291577  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:49.291630  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:49.296538  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:49.296604  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:49.325080  233647 cri.go:89] found id: ""
	I1216 03:04:49.325105  233647 logs.go:282] 0 containers: []
	W1216 03:04:49.325116  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:49.325124  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:49.325179  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:49.354438  233647 cri.go:89] found id: ""
	I1216 03:04:49.354459  233647 logs.go:282] 0 containers: []
	W1216 03:04:49.354466  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:49.354472  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:49.354511  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:49.382811  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:49.382845  233647 cri.go:89] found id: ""
	I1216 03:04:49.382855  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:49.382932  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:49.387204  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:49.387264  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:49.418634  233647 cri.go:89] found id: ""
	I1216 03:04:49.418664  233647 logs.go:282] 0 containers: []
	W1216 03:04:49.418675  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:49.418683  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:49.418743  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:49.447913  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:49.447931  233647 cri.go:89] found id: ""
	I1216 03:04:49.447938  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:49.447983  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:49.451932  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:49.451984  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:49.481512  233647 cri.go:89] found id: ""
	I1216 03:04:49.481614  233647 logs.go:282] 0 containers: []
	W1216 03:04:49.481629  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:49.481640  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:49.481700  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:49.512007  233647 cri.go:89] found id: ""
	I1216 03:04:49.512034  233647 logs.go:282] 0 containers: []
	W1216 03:04:49.512045  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:49.512057  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:49.512072  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:49.542528  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:49.542556  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:49.569992  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:49.570018  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:49.632527  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:49.632559  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:04:49.663438  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:49.663467  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:49.781088  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:49.781121  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:49.798258  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:49.798294  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:49.872292  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:49.872315  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:49.872336  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:52.412872  233647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:04:52.413225  233647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 03:04:52.413282  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:04:52.413335  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:04:52.448191  233647 cri.go:89] found id: "c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:52.448215  233647 cri.go:89] found id: ""
	I1216 03:04:52.448226  233647 logs.go:282] 1 containers: [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302]
	I1216 03:04:52.448279  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:52.452581  233647 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 03:04:52.452640  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:04:52.484525  233647 cri.go:89] found id: ""
	I1216 03:04:52.484552  233647 logs.go:282] 0 containers: []
	W1216 03:04:52.484564  233647 logs.go:284] No container was found matching "etcd"
	I1216 03:04:52.484572  233647 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 03:04:52.484633  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:04:52.518853  233647 cri.go:89] found id: ""
	I1216 03:04:52.518882  233647 logs.go:282] 0 containers: []
	W1216 03:04:52.518900  233647 logs.go:284] No container was found matching "coredns"
	I1216 03:04:52.518908  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:04:52.518968  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:04:52.556660  233647 cri.go:89] found id: "5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:52.556687  233647 cri.go:89] found id: ""
	I1216 03:04:52.556698  233647 logs.go:282] 1 containers: [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df]
	I1216 03:04:52.556758  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:52.562109  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:04:52.562182  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:04:52.595948  233647 cri.go:89] found id: ""
	I1216 03:04:52.595977  233647 logs.go:282] 0 containers: []
	W1216 03:04:52.595988  233647 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:04:52.595995  233647 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:04:52.596053  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:04:52.627312  233647 cri.go:89] found id: "534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:52.627332  233647 cri.go:89] found id: ""
	I1216 03:04:52.627341  233647 logs.go:282] 1 containers: [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b]
	I1216 03:04:52.627392  233647 ssh_runner.go:195] Run: which crictl
	I1216 03:04:52.632438  233647 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 03:04:52.632499  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:04:52.666649  233647 cri.go:89] found id: ""
	I1216 03:04:52.666679  233647 logs.go:282] 0 containers: []
	W1216 03:04:52.666706  233647 logs.go:284] No container was found matching "kindnet"
	I1216 03:04:52.666715  233647 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:04:52.666772  233647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:04:52.699911  233647 cri.go:89] found id: ""
	I1216 03:04:52.699934  233647 logs.go:282] 0 containers: []
	W1216 03:04:52.699944  233647 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:04:52.699960  233647 logs.go:123] Gathering logs for kubelet ...
	I1216 03:04:52.699972  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:04:52.804946  233647 logs.go:123] Gathering logs for dmesg ...
	I1216 03:04:52.804977  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:04:52.820613  233647 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:04:52.820638  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:04:52.900572  233647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:04:52.900689  233647 logs.go:123] Gathering logs for kube-apiserver [c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302] ...
	I1216 03:04:52.900735  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c35e52adfd47eb2e4b92ae0beab0632baf63b5410d8fc6448143d11aa2f5b302"
	I1216 03:04:52.939491  233647 logs.go:123] Gathering logs for kube-scheduler [5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df] ...
	I1216 03:04:52.939519  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b4d62110a7fcb5f8a1a19e8b8bc33d4bbf40d33c96369ca22639cf16f0a35df"
	I1216 03:04:52.969494  233647 logs.go:123] Gathering logs for kube-controller-manager [534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b] ...
	I1216 03:04:52.969530  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 534eaad750cde7ff943219f7501ed74d7bf670797962e5c5d90000a96b35500b"
	I1216 03:04:53.000070  233647 logs.go:123] Gathering logs for CRI-O ...
	I1216 03:04:53.000103  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 03:04:53.069050  233647 logs.go:123] Gathering logs for container status ...
	I1216 03:04:53.069087  233647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Dec 16 03:04:42 no-preload-307185 crio[765]: time="2025-12-16T03:04:42.111609092Z" level=info msg="Starting container: 70a51f8cf42fc45e1077ada3b5140412f5af51689b25b82085b66b019ce31d89" id=719b9993-a390-4e5d-98c6-1d5af553d3e9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:04:42 no-preload-307185 crio[765]: time="2025-12-16T03:04:42.113659976Z" level=info msg="Started container" PID=2838 containerID=70a51f8cf42fc45e1077ada3b5140412f5af51689b25b82085b66b019ce31d89 description=kube-system/coredns-7d764666f9-nm9bc/coredns id=719b9993-a390-4e5d-98c6-1d5af553d3e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a9f8157c12bea0667534e19576ade987defcbc00ecba0c677d3ebe803781ea4
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.350248933Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2a7014f9-9f63-4a15-82a2-0208d5e371df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.350330304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.355600362Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:286a698efe82f35dc2d9964659b35920dd4b7ef419123e474dace536937ca188 UID:c5a8e168-08bb-4b5c-ab8b-3f7814bcd923 NetNS:/var/run/netns/109b5b02-958c-4297-8290-a923907c1c24 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000133658}] Aliases:map[]}"
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.355630985Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.366506769Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:286a698efe82f35dc2d9964659b35920dd4b7ef419123e474dace536937ca188 UID:c5a8e168-08bb-4b5c-ab8b-3f7814bcd923 NetNS:/var/run/netns/109b5b02-958c-4297-8290-a923907c1c24 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000133658}] Aliases:map[]}"
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.366677333Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.367414413Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.368231996Z" level=info msg="Ran pod sandbox 286a698efe82f35dc2d9964659b35920dd4b7ef419123e474dace536937ca188 with infra container: default/busybox/POD" id=2a7014f9-9f63-4a15-82a2-0208d5e371df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.369510725Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d414af4a-f8e0-4ad1-8218-d3ef4faaabb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.369645336Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d414af4a-f8e0-4ad1-8218-d3ef4faaabb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.369681018Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d414af4a-f8e0-4ad1-8218-d3ef4faaabb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.370556573Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8dda0ef-b079-4cbd-906d-1a933823b97b name=/runtime.v1.ImageService/PullImage
	Dec 16 03:04:45 no-preload-307185 crio[765]: time="2025-12-16T03:04:45.372344907Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.693398446Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f8dda0ef-b079-4cbd-906d-1a933823b97b name=/runtime.v1.ImageService/PullImage
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.693994684Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2835dab8-aa3a-45ae-b50f-e9f9d7d692a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.695472338Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=834ee93e-c781-4925-abec-ec79e1f74014 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.698789363Z" level=info msg="Creating container: default/busybox/busybox" id=478860d3-f27c-419f-a229-a28cc5bbcd49 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.69893066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.702154007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.702561331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.729881072Z" level=info msg="Created container 2caecc59112f29b73f4e30559af458e55ce5ca17646630444d896afc4dd7e33a: default/busybox/busybox" id=478860d3-f27c-419f-a229-a28cc5bbcd49 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.730511207Z" level=info msg="Starting container: 2caecc59112f29b73f4e30559af458e55ce5ca17646630444d896afc4dd7e33a" id=8df64779-611d-49bf-b896-3a90c8cec3f9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:04:46 no-preload-307185 crio[765]: time="2025-12-16T03:04:46.732546657Z" level=info msg="Started container" PID=2912 containerID=2caecc59112f29b73f4e30559af458e55ce5ca17646630444d896afc4dd7e33a description=default/busybox/busybox id=8df64779-611d-49bf-b896-3a90c8cec3f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=286a698efe82f35dc2d9964659b35920dd4b7ef419123e474dace536937ca188
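
The journal above records CRI-O's check-then-pull flow for the busybox image: ImageStatus reports the tag is absent, PullImage fetches it, and the resolved content digest is logged before the container is created and started. The same sequence can be reproduced from the host with crictl; the sketch below assumes crictl is on PATH and run via sudo (as in the harness commands above), and it is not part of the test harness itself.

// Hedged sketch: ensure an image is present in the CRI store, pulling it if not.
package main

import (
	"fmt"
	"os/exec"
)

func ensureImage(image string) error {
	// `crictl inspecti` exits non-zero when the image is not in the store,
	// which corresponds to the "Image ... not found" journal line.
	if err := exec.Command("sudo", "crictl", "inspecti", image).Run(); err == nil {
		return nil // already present
	}
	// Pull by tag; CRI-O resolves and logs the digest it pinned.
	out, err := exec.Command("sudo", "crictl", "pull", image).CombinedOutput()
	if err != nil {
		return fmt.Errorf("pull %s: %v\n%s", image, err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := ensureImage("gcr.io/k8s-minikube/busybox:1.28.4-glibc"); err != nil {
		fmt.Println(err)
	}
}
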
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2caecc59112f2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   286a698efe82f       busybox                                     default
	70a51f8cf42fc       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   7a9f8157c12be       coredns-7d764666f9-nm9bc                    kube-system
	4b1a1a26d0f46       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   972d2cb505e2d       storage-provisioner                         kube-system
	66f892847f7f9       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   a573465298066       kindnet-7zn78                               kube-system
	697de9a34672f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      24 seconds ago      Running             kube-proxy                0                   8b987f4839baa       kube-proxy-tp2h2                            kube-system
	de5b7828f7a11       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   73a31f02c090b       kube-scheduler-no-preload-307185            kube-system
	952a43b024a2d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   9760a0e51b39e       kube-controller-manager-no-preload-307185   kube-system
	0bc22a67daeda       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   81005e2b6d943       kube-apiserver-no-preload-307185            kube-system
	d7e91b7a5ecb4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   5507b4dd7259c       etcd-no-preload-307185                      kube-system
	
	
	==> coredns [70a51f8cf42fc45e1077ada3b5140412f5af51689b25b82085b66b019ce31d89] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49594 - 30874 "HINFO IN 8106507861850177422.2820600425553679836. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018741774s
	
	
	==> describe nodes <==
	Name:               no-preload-307185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-307185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=no-preload-307185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_04_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:04:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-307185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:04:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:04:53 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:04:53 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:04:53 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:04:53 +0000   Tue, 16 Dec 2025 03:04:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-307185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                a794d9e9-b632-4191-ab05-a56c4459c52f
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-7d764666f9-nm9bc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-307185                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-7zn78                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-no-preload-307185             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-307185    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-tp2h2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-no-preload-307185             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-307185 event: Registered Node no-preload-307185 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [d7e91b7a5ecb4bea06cfa849ac5b08dae8a544e94e1e3c86740bd42534aa8cdd] <==
	{"level":"warn","ts":"2025-12-16T03:04:19.416633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.425169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.431606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.438433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.444349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.450727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.460950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.467041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.473648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.480452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.487960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.495656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.503094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.510268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.517350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.524867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.532275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.543712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.550935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.558489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.565856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:04:19.623069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:04:21.339980Z","caller":"traceutil/trace.go:172","msg":"trace[1448571288] transaction","detail":"{read_only:false; response_revision:108; number_of_response:1; }","duration":"162.293972ms","start":"2025-12-16T03:04:21.177653Z","end":"2025-12-16T03:04:21.339947Z","steps":["trace[1448571288] 'process raft request'  (duration: 158.896303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:04:21.560997Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.261917ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766864977126986 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:basic-user\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:basic-user\" value_size:617 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-16T03:04:21.561117Z","caller":"traceutil/trace.go:172","msg":"trace[593092676] transaction","detail":"{read_only:false; response_revision:110; number_of_response:1; }","duration":"158.532018ms","start":"2025-12-16T03:04:21.402571Z","end":"2025-12-16T03:04:21.561103Z","steps":["trace[593092676] 'process raft request'  (duration: 57.713706ms)","trace[593092676] 'compare'  (duration: 100.158692ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:04:53 up 47 min,  0 user,  load average: 3.16, 2.41, 1.68
	Linux no-preload-307185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [66f892847f7f9b8d4a93c3695a48f26a5a03f61c4b4b8457a499446c62eb47c5] <==
	I1216 03:04:30.994434       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:04:30.994747       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1216 03:04:31.057057       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:04:31.057081       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:04:31.057110       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:04:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:04:31.260127       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:04:31.357032       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:04:31.357069       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:04:31.395777       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:04:31.657063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:04:31.657098       1 metrics.go:72] Registering metrics
	I1216 03:04:31.657148       1 controller.go:711] "Syncing nftables rules"
	I1216 03:04:41.260996       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:04:41.261052       1 main.go:301] handling current node
	I1216 03:04:51.263328       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:04:51.263363       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0bc22a67daeda78f9bd0cf27a65b0c46c6775b666e95bd5ad477d0dc95cd3b88] <==
	I1216 03:04:20.093741       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 03:04:20.093750       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 03:04:20.096371       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1216 03:04:20.096412       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:04:20.103312       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:04:20.295746       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:04:21.042176       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1216 03:04:21.107630       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1216 03:04:21.107657       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 03:04:22.012923       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:04:22.046276       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:04:22.104793       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 03:04:22.111012       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1216 03:04:22.111928       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:04:22.115483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:04:22.998006       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:04:23.002944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:04:23.012037       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 03:04:23.019667       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 03:04:28.689861       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:04:28.694254       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:04:28.987916       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:04:29.038959       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1216 03:04:29.038959       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1216 03:04:52.156734       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:57796: use of closed network connection
	
	
	==> kube-controller-manager [952a43b024a2d6483644d48abab561cf995d7818436cce0d51344f9e68b1c540] <==
	I1216 03:04:27.846147       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.846167       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.846216       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.846290       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.846331       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.846380       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.845211       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1216 03:04:27.846551       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.847529       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-307185"
	I1216 03:04:27.847582       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1216 03:04:27.847096       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.847056       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.846799       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.847178       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.847190       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.847162       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.847170       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.848238       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:04:27.857069       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-307185" podCIDRs=["10.244.0.0/24"]
	I1216 03:04:27.857172       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.944724       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:27.944757       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 03:04:27.944762       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 03:04:27.948740       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:42.850036       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [697de9a34672f1b1bcb353f6d21eaa248a71215733c5e179ae006872923f4342] <==
	I1216 03:04:29.475935       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:04:29.555130       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:04:29.656279       1 shared_informer.go:377] "Caches are synced"
	I1216 03:04:29.656326       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1216 03:04:29.656464       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:04:29.676874       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:04:29.676944       1 server_linux.go:136] "Using iptables Proxier"
	I1216 03:04:29.682691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:04:29.683174       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 03:04:29.683195       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:04:29.684463       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:04:29.684486       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:04:29.684883       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:04:29.684926       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:04:29.685310       1 config.go:309] "Starting node config controller"
	I1216 03:04:29.685360       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:04:29.685385       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:04:29.685415       1 config.go:200] "Starting service config controller"
	I1216 03:04:29.685438       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:04:29.785489       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:04:29.785612       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:04:29.785665       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de5b7828f7a11fe00f0004266fac086997deb4f7491d2f2f39fc545c7f0922ce] <==
	E1216 03:04:21.030910       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1216 03:04:21.031624       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1216 03:04:21.187880       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 03:04:21.188761       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1216 03:04:21.256500       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1216 03:04:21.257527       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1216 03:04:21.329895       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1216 03:04:21.330981       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1216 03:04:21.369668       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 03:04:21.370690       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1216 03:04:21.371667       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1216 03:04:21.372513       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1216 03:04:21.454298       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1216 03:04:21.455254       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1216 03:04:21.483568       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 03:04:21.484569       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1216 03:04:21.503029       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1216 03:04:21.504131       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1216 03:04:21.526628       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1216 03:04:21.527789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1216 03:04:21.576693       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1216 03:04:21.577900       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1216 03:04:21.651672       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1216 03:04:21.652621       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	I1216 03:04:23.849627       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.155150    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7c85\" (UniqueName: \"kubernetes.io/projected/e5d25c85-cfe3-4ece-aaef-25d832bee145-kube-api-access-h7c85\") pod \"kindnet-7zn78\" (UID: \"e5d25c85-cfe3-4ece-aaef-25d832bee145\") " pod="kube-system/kindnet-7zn78"
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.155178    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/029e1cb4-d416-43bc-bd83-2309879667f3-xtables-lock\") pod \"kube-proxy-tp2h2\" (UID: \"029e1cb4-d416-43bc-bd83-2309879667f3\") " pod="kube-system/kube-proxy-tp2h2"
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.155226    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4z8h\" (UniqueName: \"kubernetes.io/projected/029e1cb4-d416-43bc-bd83-2309879667f3-kube-api-access-n4z8h\") pod \"kube-proxy-tp2h2\" (UID: \"029e1cb4-d416-43bc-bd83-2309879667f3\") " pod="kube-system/kube-proxy-tp2h2"
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.155251    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5d25c85-cfe3-4ece-aaef-25d832bee145-xtables-lock\") pod \"kindnet-7zn78\" (UID: \"e5d25c85-cfe3-4ece-aaef-25d832bee145\") " pod="kube-system/kindnet-7zn78"
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.155269    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5d25c85-cfe3-4ece-aaef-25d832bee145-lib-modules\") pod \"kindnet-7zn78\" (UID: \"e5d25c85-cfe3-4ece-aaef-25d832bee145\") " pod="kube-system/kindnet-7zn78"
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.155288    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/029e1cb4-d416-43bc-bd83-2309879667f3-lib-modules\") pod \"kube-proxy-tp2h2\" (UID: \"029e1cb4-d416-43bc-bd83-2309879667f3\") " pod="kube-system/kube-proxy-tp2h2"
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.155313    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/029e1cb4-d416-43bc-bd83-2309879667f3-kube-proxy\") pod \"kube-proxy-tp2h2\" (UID: \"029e1cb4-d416-43bc-bd83-2309879667f3\") " pod="kube-system/kube-proxy-tp2h2"
	Dec 16 03:04:29 no-preload-307185 kubelet[2221]: I1216 03:04:29.874184    2221 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tp2h2" podStartSLOduration=0.874166677 podStartE2EDuration="874.166677ms" podCreationTimestamp="2025-12-16 03:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:04:29.87396996 +0000 UTC m=+7.130417058" watchObservedRunningTime="2025-12-16 03:04:29.874166677 +0000 UTC m=+7.130613778"
	Dec 16 03:04:34 no-preload-307185 kubelet[2221]: E1216 03:04:34.122803    2221 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-307185" containerName="kube-controller-manager"
	Dec 16 03:04:34 no-preload-307185 kubelet[2221]: I1216 03:04:34.134905    2221 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-7zn78" podStartSLOduration=3.7661709439999997 podStartE2EDuration="5.134889399s" podCreationTimestamp="2025-12-16 03:04:29 +0000 UTC" firstStartedPulling="2025-12-16 03:04:29.379031895 +0000 UTC m=+6.635478974" lastFinishedPulling="2025-12-16 03:04:30.747750337 +0000 UTC m=+8.004197429" observedRunningTime="2025-12-16 03:04:30.877331057 +0000 UTC m=+8.133778157" watchObservedRunningTime="2025-12-16 03:04:34.134889399 +0000 UTC m=+11.391336498"
	Dec 16 03:04:34 no-preload-307185 kubelet[2221]: E1216 03:04:34.807976    2221 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-307185" containerName="kube-apiserver"
	Dec 16 03:04:36 no-preload-307185 kubelet[2221]: E1216 03:04:36.304795    2221 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-307185" containerName="etcd"
	Dec 16 03:04:38 no-preload-307185 kubelet[2221]: E1216 03:04:38.745060    2221 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-307185" containerName="kube-scheduler"
	Dec 16 03:04:41 no-preload-307185 kubelet[2221]: I1216 03:04:41.738220    2221 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 16 03:04:41 no-preload-307185 kubelet[2221]: I1216 03:04:41.854323    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/40130844-03c7-401f-82b6-0676c175fa4b-tmp\") pod \"storage-provisioner\" (UID: \"40130844-03c7-401f-82b6-0676c175fa4b\") " pod="kube-system/storage-provisioner"
	Dec 16 03:04:41 no-preload-307185 kubelet[2221]: I1216 03:04:41.854360    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9jj\" (UniqueName: \"kubernetes.io/projected/40130844-03c7-401f-82b6-0676c175fa4b-kube-api-access-gr9jj\") pod \"storage-provisioner\" (UID: \"40130844-03c7-401f-82b6-0676c175fa4b\") " pod="kube-system/storage-provisioner"
	Dec 16 03:04:41 no-preload-307185 kubelet[2221]: I1216 03:04:41.854379    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03616ce2-a5c9-473c-b968-8525597cf605-config-volume\") pod \"coredns-7d764666f9-nm9bc\" (UID: \"03616ce2-a5c9-473c-b968-8525597cf605\") " pod="kube-system/coredns-7d764666f9-nm9bc"
	Dec 16 03:04:41 no-preload-307185 kubelet[2221]: I1216 03:04:41.854397    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcrx\" (UniqueName: \"kubernetes.io/projected/03616ce2-a5c9-473c-b968-8525597cf605-kube-api-access-lvcrx\") pod \"coredns-7d764666f9-nm9bc\" (UID: \"03616ce2-a5c9-473c-b968-8525597cf605\") " pod="kube-system/coredns-7d764666f9-nm9bc"
	Dec 16 03:04:42 no-preload-307185 kubelet[2221]: E1216 03:04:42.891380    2221 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nm9bc" containerName="coredns"
	Dec 16 03:04:42 no-preload-307185 kubelet[2221]: I1216 03:04:42.899589    2221 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.89957298 podStartE2EDuration="13.89957298s" podCreationTimestamp="2025-12-16 03:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:04:42.899312675 +0000 UTC m=+20.155759774" watchObservedRunningTime="2025-12-16 03:04:42.89957298 +0000 UTC m=+20.156020080"
	Dec 16 03:04:42 no-preload-307185 kubelet[2221]: I1216 03:04:42.909836    2221 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nm9bc" podStartSLOduration=13.909799227 podStartE2EDuration="13.909799227s" podCreationTimestamp="2025-12-16 03:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:04:42.909589438 +0000 UTC m=+20.166036540" watchObservedRunningTime="2025-12-16 03:04:42.909799227 +0000 UTC m=+20.166246327"
	Dec 16 03:04:43 no-preload-307185 kubelet[2221]: E1216 03:04:43.893050    2221 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nm9bc" containerName="coredns"
	Dec 16 03:04:44 no-preload-307185 kubelet[2221]: E1216 03:04:44.895479    2221 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nm9bc" containerName="coredns"
	Dec 16 03:04:45 no-preload-307185 kubelet[2221]: I1216 03:04:45.077277    2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4b4\" (UniqueName: \"kubernetes.io/projected/c5a8e168-08bb-4b5c-ab8b-3f7814bcd923-kube-api-access-cr4b4\") pod \"busybox\" (UID: \"c5a8e168-08bb-4b5c-ab8b-3f7814bcd923\") " pod="default/busybox"
	Dec 16 03:04:46 no-preload-307185 kubelet[2221]: I1216 03:04:46.917213    2221 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.592547785 podStartE2EDuration="1.917195877s" podCreationTimestamp="2025-12-16 03:04:45 +0000 UTC" firstStartedPulling="2025-12-16 03:04:45.370079134 +0000 UTC m=+22.626526226" lastFinishedPulling="2025-12-16 03:04:46.69472724 +0000 UTC m=+23.951174318" observedRunningTime="2025-12-16 03:04:46.916952837 +0000 UTC m=+24.173399936" watchObservedRunningTime="2025-12-16 03:04:46.917195877 +0000 UTC m=+24.173642976"
	
	
	==> storage-provisioner [4b1a1a26d0f46e981197ab0a21a4a75d55da2c05ad194d76a90dd501ca7d447e] <==
	I1216 03:04:42.123774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:04:42.130719       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:04:42.130788       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:04:42.132856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:42.137468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:04:42.137586       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:04:42.137704       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81d1dfde-a7b2-428c-90b5-bc639acfdd4f", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-307185_fad171f9-87a7-4c64-8545-c0906a663162 became leader
	I1216 03:04:42.137762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-307185_fad171f9-87a7-4c64-8545-c0906a663162!
	W1216 03:04:42.139183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:42.143033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:04:42.238540       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-307185_fad171f9-87a7-4c64-8545-c0906a663162!
	W1216 03:04:44.146128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:44.150962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:46.153923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:46.159336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:48.162917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:48.167168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:50.171064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:50.177039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:52.181543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:04:52.187914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-307185 -n no-preload-307185
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-307185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.72s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.989155ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:05:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-079165 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-079165 describe deploy/metrics-server -n kube-system: exit status 1 (59.174262ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-079165 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
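
The exit status 11 above is not about the metrics-server image itself: before enabling an addon, minikube checks whether the cluster is paused, and that check shells out to "sudo runc list -f json" on the node. On this crio-based node the check fails because /run/runc does not exist, so the addon is never deployed and the later kubectl describe finds no metrics-server deployment. A minimal way to re-run the same check by hand, assuming the default-k8s-diff-port-079165 profile is still running (the ssh invocation is illustrative only, not part of the test):

	# re-run the paused-state check quoted in the stderr above
	out/minikube-linux-amd64 -p default-k8s-diff-port-079165 ssh -- sudo runc list -f json
	# on this node this is expected to fail with: open /run/runc: no such file or directory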
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-079165
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-079165:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7",
	        "Created": "2025-12-16T03:05:00.382441166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:05:00.422845656Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/hosts",
	        "LogPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7-json.log",
	        "Name": "/default-k8s-diff-port-079165",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-079165:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-079165",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7",
	                "LowerDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-079165",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-079165/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-079165",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-079165",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-079165",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6ca28ff6029f0a318620ce1e3ac75d084cb09c1a454fd209e79031e2acebae21",
	            "SandboxKey": "/var/run/docker/netns/6ca28ff6029f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-079165": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5282d64d27b5a2514f04f90d1cd32aa132a110f71ffb368ba477ac385094fbb2",
	                    "EndpointID": "e3c78603de7b9fd0185131e3bd50e9847b75b26da354a370a4ce74a4663e2813",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "aa:0c:67:e4:ed:95",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-079165",
	                        "17c3b6c10d0d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
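For reference, the port mappings in the inspect output above show how the cluster's API server port (8444/tcp inside the container) is published to the host (127.0.0.1:33071 in this run). Below is a minimal, illustrative Go sketch of reading that mapping back; it is not part of the test harness, it simply shells out to `docker container inspect` with a Go template much as the harness's cli_runner does, and it assumes the default-k8s-diff-port-079165 container from this report is still present on the local Docker daemon.

	// portprobe.go: illustrative sketch only (not from minikube). It recovers the
	// host port Docker published for the container's 8444/tcp, as seen in the
	// "docker container inspect" output above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		container := "default-k8s-diff-port-079165" // container name taken from the report above

		// Equivalent to:
		//   docker container inspect default-k8s-diff-port-079165 \
		//     --format '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'
		out, err := exec.Command("docker", "container", "inspect", container,
			"--format", `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`).Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}

		hostPort := strings.TrimSpace(string(out))
		// With the state captured above this prints 33071, i.e. the apiserver is
		// reachable from the host at https://127.0.0.1:33071.
		fmt.Printf("8444/tcp is published on 127.0.0.1:%s\n", hostPort)
	}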
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079165 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-079165 logs -n 25: (1.00656841s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p cilium-646016 sudo containerd config dump                                                                                                                                                                                                         │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                  │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo crio config                                                                                                                                                                                                                    │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p cilium-646016                                                                                                                                                                                                                                     │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-027639 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                              │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-027639                                                                                                                                                                                                                               │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ stop    │ -p old-k8s-version-073001 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ delete  │ -p running-upgrade-146373                                                                                                                                                                                                                            │ running-upgrade-146373       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-307185 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-073001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-307185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:05:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:05:39.730598  291579 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:05:39.730725  291579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:05:39.730731  291579 out.go:374] Setting ErrFile to fd 2...
	I1216 03:05:39.730749  291579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:05:39.730953  291579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:05:39.731410  291579 out.go:368] Setting JSON to false
	I1216 03:05:39.732635  291579 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2892,"bootTime":1765851448,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:05:39.732698  291579 start.go:143] virtualization: kvm guest
	I1216 03:05:39.734754  291579 out.go:179] * [newest-cni-991316] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:05:39.736080  291579 notify.go:221] Checking for updates...
	I1216 03:05:39.736091  291579 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:05:39.737511  291579 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:05:39.738869  291579 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:05:39.740112  291579 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:05:39.741285  291579 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:05:39.742451  291579 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:05:39.744149  291579 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:05:39.744246  291579 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:05:39.744336  291579 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:05:39.744422  291579 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:05:39.768376  291579 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:05:39.768483  291579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:05:39.824781  291579 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:05:39.815112471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:05:39.824899  291579 docker.go:319] overlay module found
	I1216 03:05:39.826778  291579 out.go:179] * Using the docker driver based on user configuration
	I1216 03:05:39.828053  291579 start.go:309] selected driver: docker
	I1216 03:05:39.828068  291579 start.go:927] validating driver "docker" against <nil>
	I1216 03:05:39.828085  291579 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:05:39.828728  291579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:05:39.884616  291579 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:05:39.874438346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:05:39.884754  291579 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 03:05:39.884776  291579 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 03:05:39.885031  291579 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 03:05:39.887314  291579 out.go:179] * Using Docker driver with root privileges
	I1216 03:05:39.888645  291579 cni.go:84] Creating CNI manager for ""
	I1216 03:05:39.888721  291579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:05:39.888734  291579 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:05:39.888847  291579 start.go:353] cluster config:
	{Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:05:39.890796  291579 out.go:179] * Starting "newest-cni-991316" primary control-plane node in "newest-cni-991316" cluster
	I1216 03:05:39.891939  291579 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:05:39.893260  291579 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:05:39.894477  291579 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:05:39.894512  291579 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1216 03:05:39.894534  291579 cache.go:65] Caching tarball of preloaded images
	I1216 03:05:39.894583  291579 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:05:39.894660  291579 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:05:39.894676  291579 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 03:05:39.894859  291579 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/config.json ...
	I1216 03:05:39.894900  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/config.json: {Name:mk53b86e54227a82ea407295920ea6a951713ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:39.916003  291579 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:05:39.916019  291579 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:05:39.916037  291579 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:05:39.916070  291579 start.go:360] acquireMachinesLock for newest-cni-991316: {Name:mk9391ad712ba901fa12a9274aabaadfeece5f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:05:39.916173  291579 start.go:364] duration metric: took 82.807µs to acquireMachinesLock for "newest-cni-991316"
	I1216 03:05:39.916200  291579 start.go:93] Provisioning new machine with config: &{Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:05:39.916296  291579 start.go:125] createHost starting for "" (driver="docker")
	W1216 03:05:37.027562  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:39.526470  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:41.527386  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:38.436055  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	W1216 03:05:40.437038  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	I1216 03:05:39.918710  291579 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:05:39.919028  291579 start.go:159] libmachine.API.Create for "newest-cni-991316" (driver="docker")
	I1216 03:05:39.919064  291579 client.go:173] LocalClient.Create starting
	I1216 03:05:39.919140  291579 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:05:39.919186  291579 main.go:143] libmachine: Decoding PEM data...
	I1216 03:05:39.919216  291579 main.go:143] libmachine: Parsing certificate...
	I1216 03:05:39.919308  291579 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:05:39.919340  291579 main.go:143] libmachine: Decoding PEM data...
	I1216 03:05:39.919357  291579 main.go:143] libmachine: Parsing certificate...
	I1216 03:05:39.919786  291579 cli_runner.go:164] Run: docker network inspect newest-cni-991316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:05:39.938120  291579 cli_runner.go:211] docker network inspect newest-cni-991316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:05:39.938198  291579 network_create.go:284] running [docker network inspect newest-cni-991316] to gather additional debugging logs...
	I1216 03:05:39.938224  291579 cli_runner.go:164] Run: docker network inspect newest-cni-991316
	W1216 03:05:39.954877  291579 cli_runner.go:211] docker network inspect newest-cni-991316 returned with exit code 1
	I1216 03:05:39.954901  291579 network_create.go:287] error running [docker network inspect newest-cni-991316]: docker network inspect newest-cni-991316: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-991316 not found
	I1216 03:05:39.954921  291579 network_create.go:289] output of [docker network inspect newest-cni-991316]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-991316 not found
	
	** /stderr **
	I1216 03:05:39.955014  291579 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:05:39.971958  291579 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:05:39.972619  291579 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:05:39.973518  291579 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:05:39.974428  291579 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f2c240}
	I1216 03:05:39.974454  291579 network_create.go:124] attempt to create docker network newest-cni-991316 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 03:05:39.974502  291579 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-991316 newest-cni-991316
	I1216 03:05:40.025935  291579 network_create.go:108] docker network newest-cni-991316 192.168.76.0/24 created
	I1216 03:05:40.025965  291579 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-991316" container
	I1216 03:05:40.026048  291579 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:05:40.044058  291579 cli_runner.go:164] Run: docker volume create newest-cni-991316 --label name.minikube.sigs.k8s.io=newest-cni-991316 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:05:40.062349  291579 oci.go:103] Successfully created a docker volume newest-cni-991316
	I1216 03:05:40.062415  291579 cli_runner.go:164] Run: docker run --rm --name newest-cni-991316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-991316 --entrypoint /usr/bin/test -v newest-cni-991316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:05:40.477363  291579 oci.go:107] Successfully prepared a docker volume newest-cni-991316
	I1216 03:05:40.477552  291579 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:05:40.477571  291579 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:05:40.477627  291579 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-991316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 03:05:44.358059  291579 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-991316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.880387316s)
	I1216 03:05:44.358093  291579 kic.go:203] duration metric: took 3.88051827s to extract preloaded images to volume ...
	W1216 03:05:44.358191  291579 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:05:44.358247  291579 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:05:44.358295  291579 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:05:44.413162  291579 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-991316 --name newest-cni-991316 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-991316 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-991316 --network newest-cni-991316 --ip 192.168.76.2 --volume newest-cni-991316:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:05:44.689560  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Running}}
	I1216 03:05:44.709263  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:05:44.728364  291579 cli_runner.go:164] Run: docker exec newest-cni-991316 stat /var/lib/dpkg/alternatives/iptables
	W1216 03:05:44.026903  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:46.526864  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:42.935061  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	W1216 03:05:44.945574  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 16 03:05:36 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:36.144461324Z" level=info msg="Started container" PID=1886 containerID=f080def47a3b7038b6c43c26d3ea8a50557ae098ec5e4670e45b4c2394a7ec75 description=kube-system/storage-provisioner/storage-provisioner id=f5b5f93c-5938-4638-adbe-d2159b04d8d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=450348662e953c61149f81d98a2db5cb7bd014f59c757a22c989d6a056945e73
	Dec 16 03:05:36 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:36.145572731Z" level=info msg="Started container" PID=1887 containerID=27428646a9b70bd7b331796fa73a077fd10be38367a63b5afea6d9657ddd16e6 description=kube-system/coredns-66bc5c9577-xndlx/coredns id=4a2ef13f-c4bb-48c3-bda3-88a9a0c8fa63 name=/runtime.v1.RuntimeService/StartContainer sandboxID=851faa734e9a0f5e42e7f223b15c0461e0ed1dce74e075e72e1b07e3379bee34
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.382494176Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7ad002eb-aa1c-411d-83fc-123fa4367872 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.382604366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.388066373Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:29acde487f04debfac74ab1f524a591fe135830c5e524dea619e8257c0ec8648 UID:82e37b9d-9cbd-4f3b-bb01-1e9aa8b3db33 NetNS:/var/run/netns/91b4f601-d5e3-41a2-898e-a07ba439268d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002d2890}] Aliases:map[]}"
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.388204236Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.399182662Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:29acde487f04debfac74ab1f524a591fe135830c5e524dea619e8257c0ec8648 UID:82e37b9d-9cbd-4f3b-bb01-1e9aa8b3db33 NetNS:/var/run/netns/91b4f601-d5e3-41a2-898e-a07ba439268d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002d2890}] Aliases:map[]}"
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.399333453Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.400120372Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.401060061Z" level=info msg="Ran pod sandbox 29acde487f04debfac74ab1f524a591fe135830c5e524dea619e8257c0ec8648 with infra container: default/busybox/POD" id=7ad002eb-aa1c-411d-83fc-123fa4367872 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.402277666Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5f16c8bf-c941-4502-8d37-38a3f07ad051 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.40238181Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5f16c8bf-c941-4502-8d37-38a3f07ad051 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.402411684Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5f16c8bf-c941-4502-8d37-38a3f07ad051 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.403248505Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4a0ad48-77da-42fc-b0e4-e053e3349257 name=/runtime.v1.ImageService/PullImage
	Dec 16 03:05:39 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:39.408422397Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.726088717Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b4a0ad48-77da-42fc-b0e4-e053e3349257 name=/runtime.v1.ImageService/PullImage
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.726948279Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=47d78468-e187-49df-8f59-d1282b1f4fea name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.728363845Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8636821d-cec4-4d3a-944d-012f3889b53f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.731759841Z" level=info msg="Creating container: default/busybox/busybox" id=1ea7d891-fe69-42d3-8bc6-6fc9890c07b3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.731955015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.736629905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.737069802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.760665086Z" level=info msg="Created container 6bdd425f3b2c9d8e192bd18b85d59cce76088eee3b13973d4f61bdc98cbc738b: default/busybox/busybox" id=1ea7d891-fe69-42d3-8bc6-6fc9890c07b3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.7613252Z" level=info msg="Starting container: 6bdd425f3b2c9d8e192bd18b85d59cce76088eee3b13973d4f61bdc98cbc738b" id=768e335d-9547-4c99-8862-5bf3004c2197 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:05:40 default-k8s-diff-port-079165 crio[782]: time="2025-12-16T03:05:40.762986302Z" level=info msg="Started container" PID=1963 containerID=6bdd425f3b2c9d8e192bd18b85d59cce76088eee3b13973d4f61bdc98cbc738b description=default/busybox/busybox id=768e335d-9547-4c99-8862-5bf3004c2197 name=/runtime.v1.RuntimeService/StartContainer sandboxID=29acde487f04debfac74ab1f524a591fe135830c5e524dea619e8257c0ec8648
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	6bdd425f3b2c9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   29acde487f04d       busybox                                                default
	27428646a9b70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   851faa734e9a0       coredns-66bc5c9577-xndlx                               kube-system
	f080def47a3b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   450348662e953       storage-provisioner                                    kube-system
	c20c9c4e6e326       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   8f8dbb6096db6       kube-proxy-2g6tn                                       kube-system
	f5ae75609a9ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   466b64145c1c5       kindnet-w5gmn                                          kube-system
	bd7a05414e673       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   d73a609f3d93d       kube-controller-manager-default-k8s-diff-port-079165   kube-system
	c8eabcdd4e581       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   5a7baae4ba740       kube-apiserver-default-k8s-diff-port-079165            kube-system
	3d66656aab181       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   2cabd31eb01cf       etcd-default-k8s-diff-port-079165                      kube-system
	2df484dbff906       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   ae88563681311       kube-scheduler-default-k8s-diff-port-079165            kube-system
	
	
	==> coredns [27428646a9b70bd7b331796fa73a077fd10be38367a63b5afea6d9657ddd16e6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42078 - 13006 "HINFO IN 6734529072151345790.2308347574030180033. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048605336s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-079165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-079165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=default-k8s-diff-port-079165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_05_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:05:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-079165
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:05:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:05:40 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:05:40 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:05:40 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:05:40 +0000   Tue, 16 Dec 2025 03:05:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-079165
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                67cf8032-f343-4067-841b-e5dc637b7a61
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-xndlx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-079165                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-w5gmn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-079165             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-079165    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-2g6tn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-079165             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-079165 event: Registered Node default-k8s-diff-port-079165 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-079165 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [3d66656aab1810647b42a3ec80193b36b6f29c977c4699f9e7407dc973589da8] <==
	{"level":"warn","ts":"2025-12-16T03:05:16.102850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.109327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.116693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.129668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.136802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.144778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.152955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.161334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.168788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.177287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.184845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.193986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.202139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.209983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.218473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.225805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.233871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.250075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.253864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.264950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:16.272226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41346","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:05:39.026476Z","caller":"traceutil/trace.go:172","msg":"trace[538036191] linearizableReadLoop","detail":"{readStateIndex:432; appliedIndex:432; }","duration":"104.098079ms","start":"2025-12-16T03:05:38.922347Z","end":"2025-12-16T03:05:39.026445Z","steps":["trace[538036191] 'read index received'  (duration: 104.087514ms)","trace[538036191] 'applied index is now lower than readState.Index'  (duration: 8.636µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T03:05:39.026627Z","caller":"traceutil/trace.go:172","msg":"trace[580878772] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"114.595269ms","start":"2025-12-16T03:05:38.912017Z","end":"2025-12-16T03:05:39.026612Z","steps":["trace[580878772] 'process raft request'  (duration: 114.472838ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:05:39.026688Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.305738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T03:05:39.026770Z","caller":"traceutil/trace.go:172","msg":"trace[779939756] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:419; }","duration":"104.423538ms","start":"2025-12-16T03:05:38.922337Z","end":"2025-12-16T03:05:39.026761Z","steps":["trace[779939756] 'agreement among raft nodes before linearized reading'  (duration: 104.239691ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:05:48 up 48 min,  0 user,  load average: 5.05, 3.10, 1.96
	Linux default-k8s-diff-port-079165 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f5ae75609a9ba304318fe7b3a855f486fb7719ee725ffdc298514ccda7846803] <==
	I1216 03:05:25.312160       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:05:25.312737       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 03:05:25.312988       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:05:25.313029       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:05:25.313063       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:05:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:05:25.610361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:05:25.610394       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:05:25.610404       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:05:25.610866       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:05:25.910476       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:05:25.911571       1 metrics.go:72] Registering metrics
	I1216 03:05:25.911662       1 controller.go:711] "Syncing nftables rules"
	I1216 03:05:35.611119       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:05:35.611183       1 main.go:301] handling current node
	I1216 03:05:45.610985       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:05:45.611015       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c8eabcdd4e5814b8b78795bd6a82ce71ab4b3f73394ed91f065f7cb411e55650] <==
	I1216 03:05:16.960803       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:05:16.967626       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:05:16.967762       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:05:16.967805       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:05:16.967862       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:05:16.967901       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:05:17.139086       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:05:17.852084       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 03:05:17.859526       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 03:05:17.859630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:05:18.373768       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:05:18.411445       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:05:18.452130       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 03:05:18.458005       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1216 03:05:18.459219       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:05:18.463022       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:05:18.899506       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:05:19.708229       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:05:19.729249       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 03:05:19.757519       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 03:05:24.604026       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:05:24.651109       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1216 03:05:24.759652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:05:24.764390       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1216 03:05:47.007071       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:37714: use of closed network connection
	
	
	==> kube-controller-manager [bd7a05414e673b87149050ddd8214abfc60b224d9b2b945955a30eae68258e24] <==
	I1216 03:05:23.899017       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 03:05:23.899077       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:05:23.899077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 03:05:23.899078       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 03:05:23.899167       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 03:05:23.899243       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 03:05:23.900281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 03:05:23.900292       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 03:05:23.900387       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 03:05:23.900407       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 03:05:23.901545       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 03:05:23.901566       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 03:05:23.901684       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:05:23.901993       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:05:23.902583       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:05:23.905101       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:05:23.905145       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 03:05:23.905193       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:05:23.905244       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:05:23.905255       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:05:23.905261       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:05:23.905267       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 03:05:23.912723       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-079165" podCIDRs=["10.244.0.0/24"]
	I1216 03:05:23.920025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:05:38.849023       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c20c9c4e6e3266a49080bd653779edce1d73b673c3284e75faa6bffa28c310c7] <==
	I1216 03:05:25.100539       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:05:25.171896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:05:25.272874       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:05:25.272914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 03:05:25.273045       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:05:25.316124       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:05:25.316191       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:05:25.323624       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:05:25.324670       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:05:25.324733       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:05:25.327347       1 config.go:200] "Starting service config controller"
	I1216 03:05:25.327418       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:05:25.327486       1 config.go:309] "Starting node config controller"
	I1216 03:05:25.327500       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:05:25.328954       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:05:25.328978       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:05:25.329260       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:05:25.329434       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:05:25.427631       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:05:25.427638       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:05:25.430173       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:05:25.430419       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2df484dbff90648a28feac266c6b039adee3833a9a08034895426089cf31907d] <==
	E1216 03:05:16.922062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 03:05:16.922101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 03:05:16.922128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 03:05:16.922119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 03:05:16.922176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 03:05:16.922294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 03:05:16.922506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 03:05:16.922453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 03:05:16.922433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 03:05:16.922639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 03:05:16.922690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 03:05:16.922714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 03:05:16.922426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 03:05:17.792989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 03:05:17.822709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 03:05:17.990778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 03:05:17.998866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 03:05:18.039467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 03:05:18.047720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 03:05:18.051870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 03:05:18.056121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 03:05:18.092691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 03:05:18.203508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 03:05:18.215743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1216 03:05:19.818409       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 03:05:20 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:20.659284    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-079165" podStartSLOduration=1.6592575489999999 podStartE2EDuration="1.659257549s" podCreationTimestamp="2025-12-16 03:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:20.649883514 +0000 UTC m=+1.172411629" watchObservedRunningTime="2025-12-16 03:05:20.659257549 +0000 UTC m=+1.181785669"
	Dec 16 03:05:20 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:20.679724    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-079165" podStartSLOduration=1.679703044 podStartE2EDuration="1.679703044s" podCreationTimestamp="2025-12-16 03:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:20.659550026 +0000 UTC m=+1.182078146" watchObservedRunningTime="2025-12-16 03:05:20.679703044 +0000 UTC m=+1.202231161"
	Dec 16 03:05:20 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:20.697300    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-079165" podStartSLOduration=1.6972818859999999 podStartE2EDuration="1.697281886s" podCreationTimestamp="2025-12-16 03:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:20.682012501 +0000 UTC m=+1.204540621" watchObservedRunningTime="2025-12-16 03:05:20.697281886 +0000 UTC m=+1.219810006"
	Dec 16 03:05:20 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:20.708210    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-079165" podStartSLOduration=1.7081901290000001 podStartE2EDuration="1.708190129s" podCreationTimestamp="2025-12-16 03:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:20.697484557 +0000 UTC m=+1.220012669" watchObservedRunningTime="2025-12-16 03:05:20.708190129 +0000 UTC m=+1.230718249"
	Dec 16 03:05:23 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:23.954912    1355 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 03:05:23 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:23.955674    1355 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715471    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/337c0564-5086-43df-acb3-ba8fab73b162-cni-cfg\") pod \"kindnet-w5gmn\" (UID: \"337c0564-5086-43df-acb3-ba8fab73b162\") " pod="kube-system/kindnet-w5gmn"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715530    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92a7d928-6294-47f7-a0e0-c4ccdfd04917-xtables-lock\") pod \"kube-proxy-2g6tn\" (UID: \"92a7d928-6294-47f7-a0e0-c4ccdfd04917\") " pod="kube-system/kube-proxy-2g6tn"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715569    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92a7d928-6294-47f7-a0e0-c4ccdfd04917-lib-modules\") pod \"kube-proxy-2g6tn\" (UID: \"92a7d928-6294-47f7-a0e0-c4ccdfd04917\") " pod="kube-system/kube-proxy-2g6tn"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715594    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757qj\" (UniqueName: \"kubernetes.io/projected/92a7d928-6294-47f7-a0e0-c4ccdfd04917-kube-api-access-757qj\") pod \"kube-proxy-2g6tn\" (UID: \"92a7d928-6294-47f7-a0e0-c4ccdfd04917\") " pod="kube-system/kube-proxy-2g6tn"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715630    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/337c0564-5086-43df-acb3-ba8fab73b162-xtables-lock\") pod \"kindnet-w5gmn\" (UID: \"337c0564-5086-43df-acb3-ba8fab73b162\") " pod="kube-system/kindnet-w5gmn"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715651    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/337c0564-5086-43df-acb3-ba8fab73b162-lib-modules\") pod \"kindnet-w5gmn\" (UID: \"337c0564-5086-43df-acb3-ba8fab73b162\") " pod="kube-system/kindnet-w5gmn"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715678    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ph8\" (UniqueName: \"kubernetes.io/projected/337c0564-5086-43df-acb3-ba8fab73b162-kube-api-access-c8ph8\") pod \"kindnet-w5gmn\" (UID: \"337c0564-5086-43df-acb3-ba8fab73b162\") " pod="kube-system/kindnet-w5gmn"
	Dec 16 03:05:24 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:24.715699    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92a7d928-6294-47f7-a0e0-c4ccdfd04917-kube-proxy\") pod \"kube-proxy-2g6tn\" (UID: \"92a7d928-6294-47f7-a0e0-c4ccdfd04917\") " pod="kube-system/kube-proxy-2g6tn"
	Dec 16 03:05:25 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:25.670430    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w5gmn" podStartSLOduration=1.670403227 podStartE2EDuration="1.670403227s" podCreationTimestamp="2025-12-16 03:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:25.67020169 +0000 UTC m=+6.192729812" watchObservedRunningTime="2025-12-16 03:05:25.670403227 +0000 UTC m=+6.192931347"
	Dec 16 03:05:25 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:25.670571    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2g6tn" podStartSLOduration=1.670562356 podStartE2EDuration="1.670562356s" podCreationTimestamp="2025-12-16 03:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:25.655954169 +0000 UTC m=+6.178482288" watchObservedRunningTime="2025-12-16 03:05:25.670562356 +0000 UTC m=+6.193090475"
	Dec 16 03:05:35 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:35.751583    1355 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 03:05:35 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:35.891515    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/373e93d7-33a0-47d5-b35a-cff7f427ea82-tmp\") pod \"storage-provisioner\" (UID: \"373e93d7-33a0-47d5-b35a-cff7f427ea82\") " pod="kube-system/storage-provisioner"
	Dec 16 03:05:35 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:35.891580    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504929d7-3899-44dd-a269-0a4b9f7c3e2a-config-volume\") pod \"coredns-66bc5c9577-xndlx\" (UID: \"504929d7-3899-44dd-a269-0a4b9f7c3e2a\") " pod="kube-system/coredns-66bc5c9577-xndlx"
	Dec 16 03:05:35 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:35.891608    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqxx9\" (UniqueName: \"kubernetes.io/projected/504929d7-3899-44dd-a269-0a4b9f7c3e2a-kube-api-access-wqxx9\") pod \"coredns-66bc5c9577-xndlx\" (UID: \"504929d7-3899-44dd-a269-0a4b9f7c3e2a\") " pod="kube-system/coredns-66bc5c9577-xndlx"
	Dec 16 03:05:35 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:35.891685    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2g9n\" (UniqueName: \"kubernetes.io/projected/373e93d7-33a0-47d5-b35a-cff7f427ea82-kube-api-access-m2g9n\") pod \"storage-provisioner\" (UID: \"373e93d7-33a0-47d5-b35a-cff7f427ea82\") " pod="kube-system/storage-provisioner"
	Dec 16 03:05:36 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:36.680926    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.680903105 podStartE2EDuration="11.680903105s" podCreationTimestamp="2025-12-16 03:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:36.66990357 +0000 UTC m=+17.192431709" watchObservedRunningTime="2025-12-16 03:05:36.680903105 +0000 UTC m=+17.203431224"
	Dec 16 03:05:39 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:39.028192    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xndlx" podStartSLOduration=14.028161603 podStartE2EDuration="14.028161603s" podCreationTimestamp="2025-12-16 03:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:36.681070383 +0000 UTC m=+17.203598515" watchObservedRunningTime="2025-12-16 03:05:39.028161603 +0000 UTC m=+19.550689725"
	Dec 16 03:05:39 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:39.109926    1355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkts5\" (UniqueName: \"kubernetes.io/projected/82e37b9d-9cbd-4f3b-bb01-1e9aa8b3db33-kube-api-access-pkts5\") pod \"busybox\" (UID: \"82e37b9d-9cbd-4f3b-bb01-1e9aa8b3db33\") " pod="default/busybox"
	Dec 16 03:05:41 default-k8s-diff-port-079165 kubelet[1355]: I1216 03:05:41.691245    1355 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.3661734340000002 podStartE2EDuration="3.691221715s" podCreationTimestamp="2025-12-16 03:05:38 +0000 UTC" firstStartedPulling="2025-12-16 03:05:39.402725359 +0000 UTC m=+19.925253471" lastFinishedPulling="2025-12-16 03:05:40.727773639 +0000 UTC m=+21.250301752" observedRunningTime="2025-12-16 03:05:41.691035927 +0000 UTC m=+22.213564048" watchObservedRunningTime="2025-12-16 03:05:41.691221715 +0000 UTC m=+22.213749834"
	
	
	==> storage-provisioner [f080def47a3b7038b6c43c26d3ea8a50557ae098ec5e4670e45b4c2394a7ec75] <==
	I1216 03:05:36.157983       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:05:36.168028       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:05:36.168146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:05:36.170394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:36.175048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:05:36.175228       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:05:36.175469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079165_e293287d-7581-417d-a6d4-2c56e28230e1!
	I1216 03:05:36.175672       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41786d2c-b62a-4752-9d3d-2698b61108be", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-079165_e293287d-7581-417d-a6d4-2c56e28230e1 became leader
	W1216 03:05:36.180246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:36.192902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:05:36.276315       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079165_e293287d-7581-417d-a6d4-2c56e28230e1!
	W1216 03:05:38.196414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:38.201327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:40.204751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:40.208802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:42.212453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:42.217111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:44.220924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:44.276264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:46.279278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:46.283153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:48.286201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:48.291530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-079165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.285835ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
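The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that check fails here because /run/runc does not exist on this crio node. A minimal sketch for reproducing the check by hand, using the profile name from this run; the crictl command is an assumption, offered only because CRI-O's own CLI does not depend on the /run/runc state directory:

	# The command minikube's paused-cluster check runs on the node (fails on this node):
	minikube -p newest-cni-991316 ssh -- sudo runc list -f json

	# Assumed alternative: list containers through CRI-O's CLI to confirm the pods are
	# actually running even though runc has no state directory to read:
	minikube -p newest-cni-991316 ssh -- sudo crictl ps -a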
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-991316
helpers_test.go:244: (dbg) docker inspect newest-cni-991316:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d",
	        "Created": "2025-12-16T03:05:44.429433316Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292183,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:05:44.467969541Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/hosts",
	        "LogPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d-json.log",
	        "Name": "/newest-cni-991316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-991316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-991316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d",
	                "LowerDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-991316",
	                "Source": "/var/lib/docker/volumes/newest-cni-991316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-991316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-991316",
	                "name.minikube.sigs.k8s.io": "newest-cni-991316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c9b4ef0d12240f4eb1e200407a5bb8bd988774004f9bd516265a63df4e898564",
	            "SandboxKey": "/var/run/docker/netns/c9b4ef0d1224",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-991316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5f2a89125abbce7b9991af7d91b2faefd2ac42de4f13e650434f1e7fd46fcce",
	                    "EndpointID": "50b53ded40e87b68e52fcc26771ee4e9e46d5c5134a74171b09b5216c63082f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:ca:c0:14:b6:71",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-991316",
	                        "4f4fbbe06579"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
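The inspect output above shows the node container itself is healthy from Docker's point of view ("Status": "running", "Paused": false), so the MK_ADDON_ENABLE_PAUSED exit comes from the in-node runc check rather than from the container state. A sketch of how to pull just those two fields for this profile, using docker inspect's standard --format template syntax:

	docker inspect newest-cni-991316 --format 'status={{.State.Status}} paused={{.State.Paused}}'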
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-991316 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-991316 logs -n 25: (1.034914804s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-646016 sudo systemctl cat crio --no-pager                                                                                                                                                                                                  │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ ssh     │ -p cilium-646016 sudo crio config                                                                                                                                                                                                                    │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p cilium-646016                                                                                                                                                                                                                                     │ cilium-646016                │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-027639 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                              │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-027639                                                                                                                                                                                                                               │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ stop    │ -p old-k8s-version-073001 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ delete  │ -p running-upgrade-146373                                                                                                                                                                                                                            │ running-upgrade-146373       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-307185 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-073001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-307185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079165 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:05:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:05:39.730598  291579 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:05:39.730725  291579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:05:39.730731  291579 out.go:374] Setting ErrFile to fd 2...
	I1216 03:05:39.730749  291579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:05:39.730953  291579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:05:39.731410  291579 out.go:368] Setting JSON to false
	I1216 03:05:39.732635  291579 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2892,"bootTime":1765851448,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:05:39.732698  291579 start.go:143] virtualization: kvm guest
	I1216 03:05:39.734754  291579 out.go:179] * [newest-cni-991316] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:05:39.736080  291579 notify.go:221] Checking for updates...
	I1216 03:05:39.736091  291579 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:05:39.737511  291579 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:05:39.738869  291579 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:05:39.740112  291579 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:05:39.741285  291579 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:05:39.742451  291579 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:05:39.744149  291579 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:05:39.744246  291579 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:05:39.744336  291579 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:05:39.744422  291579 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:05:39.768376  291579 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:05:39.768483  291579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:05:39.824781  291579 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:05:39.815112471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:05:39.824899  291579 docker.go:319] overlay module found
	I1216 03:05:39.826778  291579 out.go:179] * Using the docker driver based on user configuration
	I1216 03:05:39.828053  291579 start.go:309] selected driver: docker
	I1216 03:05:39.828068  291579 start.go:927] validating driver "docker" against <nil>
	I1216 03:05:39.828085  291579 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:05:39.828728  291579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:05:39.884616  291579 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:05:39.874438346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:05:39.884754  291579 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 03:05:39.884776  291579 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 03:05:39.885031  291579 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 03:05:39.887314  291579 out.go:179] * Using Docker driver with root privileges
	I1216 03:05:39.888645  291579 cni.go:84] Creating CNI manager for ""
	I1216 03:05:39.888721  291579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:05:39.888734  291579 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:05:39.888847  291579 start.go:353] cluster config:
	{Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:05:39.890796  291579 out.go:179] * Starting "newest-cni-991316" primary control-plane node in "newest-cni-991316" cluster
	I1216 03:05:39.891939  291579 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:05:39.893260  291579 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:05:39.894477  291579 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:05:39.894512  291579 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1216 03:05:39.894534  291579 cache.go:65] Caching tarball of preloaded images
	I1216 03:05:39.894583  291579 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:05:39.894660  291579 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:05:39.894676  291579 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 03:05:39.894859  291579 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/config.json ...
	I1216 03:05:39.894900  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/config.json: {Name:mk53b86e54227a82ea407295920ea6a951713ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:39.916003  291579 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:05:39.916019  291579 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:05:39.916037  291579 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:05:39.916070  291579 start.go:360] acquireMachinesLock for newest-cni-991316: {Name:mk9391ad712ba901fa12a9274aabaadfeece5f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:05:39.916173  291579 start.go:364] duration metric: took 82.807µs to acquireMachinesLock for "newest-cni-991316"
	I1216 03:05:39.916200  291579 start.go:93] Provisioning new machine with config: &{Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:05:39.916296  291579 start.go:125] createHost starting for "" (driver="docker")
	W1216 03:05:37.027562  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:39.526470  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:41.527386  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:38.436055  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	W1216 03:05:40.437038  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	I1216 03:05:39.918710  291579 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:05:39.919028  291579 start.go:159] libmachine.API.Create for "newest-cni-991316" (driver="docker")
	I1216 03:05:39.919064  291579 client.go:173] LocalClient.Create starting
	I1216 03:05:39.919140  291579 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:05:39.919186  291579 main.go:143] libmachine: Decoding PEM data...
	I1216 03:05:39.919216  291579 main.go:143] libmachine: Parsing certificate...
	I1216 03:05:39.919308  291579 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:05:39.919340  291579 main.go:143] libmachine: Decoding PEM data...
	I1216 03:05:39.919357  291579 main.go:143] libmachine: Parsing certificate...
	I1216 03:05:39.919786  291579 cli_runner.go:164] Run: docker network inspect newest-cni-991316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:05:39.938120  291579 cli_runner.go:211] docker network inspect newest-cni-991316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:05:39.938198  291579 network_create.go:284] running [docker network inspect newest-cni-991316] to gather additional debugging logs...
	I1216 03:05:39.938224  291579 cli_runner.go:164] Run: docker network inspect newest-cni-991316
	W1216 03:05:39.954877  291579 cli_runner.go:211] docker network inspect newest-cni-991316 returned with exit code 1
	I1216 03:05:39.954901  291579 network_create.go:287] error running [docker network inspect newest-cni-991316]: docker network inspect newest-cni-991316: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-991316 not found
	I1216 03:05:39.954921  291579 network_create.go:289] output of [docker network inspect newest-cni-991316]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-991316 not found
	
	** /stderr **
	I1216 03:05:39.955014  291579 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:05:39.971958  291579 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:05:39.972619  291579 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:05:39.973518  291579 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:05:39.974428  291579 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f2c240}
	I1216 03:05:39.974454  291579 network_create.go:124] attempt to create docker network newest-cni-991316 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 03:05:39.974502  291579 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-991316 newest-cni-991316
	I1216 03:05:40.025935  291579 network_create.go:108] docker network newest-cni-991316 192.168.76.0/24 created
	I1216 03:05:40.025965  291579 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-991316" container
	I1216 03:05:40.026048  291579 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:05:40.044058  291579 cli_runner.go:164] Run: docker volume create newest-cni-991316 --label name.minikube.sigs.k8s.io=newest-cni-991316 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:05:40.062349  291579 oci.go:103] Successfully created a docker volume newest-cni-991316
	I1216 03:05:40.062415  291579 cli_runner.go:164] Run: docker run --rm --name newest-cni-991316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-991316 --entrypoint /usr/bin/test -v newest-cni-991316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:05:40.477363  291579 oci.go:107] Successfully prepared a docker volume newest-cni-991316
	I1216 03:05:40.477552  291579 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:05:40.477571  291579 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:05:40.477627  291579 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-991316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 03:05:44.358059  291579 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-991316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.880387316s)
	I1216 03:05:44.358093  291579 kic.go:203] duration metric: took 3.88051827s to extract preloaded images to volume ...
	W1216 03:05:44.358191  291579 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:05:44.358247  291579 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:05:44.358295  291579 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:05:44.413162  291579 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-991316 --name newest-cni-991316 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-991316 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-991316 --network newest-cni-991316 --ip 192.168.76.2 --volume newest-cni-991316:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:05:44.689560  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Running}}
	I1216 03:05:44.709263  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:05:44.728364  291579 cli_runner.go:164] Run: docker exec newest-cni-991316 stat /var/lib/dpkg/alternatives/iptables
	W1216 03:05:44.026903  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:46.526864  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:42.935061  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	W1216 03:05:44.945574  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	I1216 03:05:44.775114  291579 oci.go:144] the created container "newest-cni-991316" has a running status.
	I1216 03:05:44.775144  291579 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa...
	I1216 03:05:44.831144  291579 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:05:44.860188  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:05:44.878522  291579 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:05:44.878541  291579 kic_runner.go:114] Args: [docker exec --privileged newest-cni-991316 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:05:44.924201  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:05:44.949813  291579 machine.go:94] provisionDockerMachine start ...
	I1216 03:05:44.949998  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:44.970506  291579 main.go:143] libmachine: Using SSH client type: native
	I1216 03:05:44.970796  291579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1216 03:05:44.970839  291579 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:05:44.971575  291579 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59346->127.0.0.1:33083: read: connection reset by peer
	I1216 03:05:48.111467  291579 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-991316
	
	I1216 03:05:48.111492  291579 ubuntu.go:182] provisioning hostname "newest-cni-991316"
	I1216 03:05:48.111559  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:48.131918  291579 main.go:143] libmachine: Using SSH client type: native
	I1216 03:05:48.132135  291579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1216 03:05:48.132148  291579 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-991316 && echo "newest-cni-991316" | sudo tee /etc/hostname
	I1216 03:05:48.284182  291579 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-991316
	
	I1216 03:05:48.284267  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:48.306052  291579 main.go:143] libmachine: Using SSH client type: native
	I1216 03:05:48.306325  291579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1216 03:05:48.306353  291579 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-991316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-991316/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-991316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:05:48.448561  291579 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:05:48.448589  291579 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:05:48.448630  291579 ubuntu.go:190] setting up certificates
	I1216 03:05:48.448652  291579 provision.go:84] configureAuth start
	I1216 03:05:48.448714  291579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-991316
	I1216 03:05:48.469621  291579 provision.go:143] copyHostCerts
	I1216 03:05:48.469670  291579 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:05:48.469679  291579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:05:48.469739  291579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:05:48.469813  291579 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:05:48.469836  291579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:05:48.469873  291579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:05:48.469945  291579 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:05:48.469953  291579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:05:48.469979  291579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:05:48.470038  291579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.newest-cni-991316 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-991316]
	I1216 03:05:48.574336  291579 provision.go:177] copyRemoteCerts
	I1216 03:05:48.574423  291579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:05:48.574472  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:48.595542  291579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:05:48.697328  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:05:48.718018  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:05:48.737930  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 03:05:48.759714  291579 provision.go:87] duration metric: took 311.047889ms to configureAuth
	I1216 03:05:48.759746  291579 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:05:48.759966  291579 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:05:48.760087  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:48.781489  291579 main.go:143] libmachine: Using SSH client type: native
	I1216 03:05:48.781793  291579 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1216 03:05:48.781842  291579 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:05:49.085788  291579 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:05:49.085813  291579 machine.go:97] duration metric: took 4.135934839s to provisionDockerMachine
	I1216 03:05:49.085879  291579 client.go:176] duration metric: took 9.166806973s to LocalClient.Create
	I1216 03:05:49.085905  291579 start.go:167] duration metric: took 9.166877792s to libmachine.API.Create "newest-cni-991316"
	I1216 03:05:49.085916  291579 start.go:293] postStartSetup for "newest-cni-991316" (driver="docker")
	I1216 03:05:49.085932  291579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:05:49.085995  291579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:05:49.086042  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:49.106537  291579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:05:49.212303  291579 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:05:49.216172  291579 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:05:49.216203  291579 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:05:49.216213  291579 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:05:49.216264  291579 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:05:49.216361  291579 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:05:49.216477  291579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:05:49.224134  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:05:49.244360  291579 start.go:296] duration metric: took 158.43109ms for postStartSetup
	I1216 03:05:49.244670  291579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-991316
	I1216 03:05:49.263766  291579 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/config.json ...
	I1216 03:05:49.264087  291579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:05:49.264143  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:49.282899  291579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:05:49.381222  291579 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:05:49.386385  291579 start.go:128] duration metric: took 9.470072117s to createHost
	I1216 03:05:49.386418  291579 start.go:83] releasing machines lock for "newest-cni-991316", held for 9.470230975s
	I1216 03:05:49.386473  291579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-991316
	I1216 03:05:49.407917  291579 ssh_runner.go:195] Run: cat /version.json
	I1216 03:05:49.407962  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:49.408005  291579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:05:49.408093  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:05:49.427263  291579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:05:49.429802  291579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:05:49.580308  291579 ssh_runner.go:195] Run: systemctl --version
	I1216 03:05:49.587319  291579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:05:49.623466  291579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:05:49.628339  291579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:05:49.628410  291579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:05:49.654872  291579 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:05:49.654895  291579 start.go:496] detecting cgroup driver to use...
	I1216 03:05:49.654930  291579 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:05:49.654983  291579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:05:49.670783  291579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:05:49.683128  291579 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:05:49.683179  291579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:05:49.699158  291579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:05:49.716222  291579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:05:49.793060  291579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:05:49.876487  291579 docker.go:234] disabling docker service ...
	I1216 03:05:49.876543  291579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:05:49.895305  291579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:05:49.907960  291579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:05:49.987312  291579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:05:50.069637  291579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:05:50.082245  291579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:05:50.096547  291579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:05:50.096609  291579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:05:50.107470  291579 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:05:50.107583  291579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:05:50.116870  291579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:05:50.125734  291579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:05:50.134489  291579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:05:50.142557  291579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:05:50.151136  291579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:05:50.165198  291579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:05:50.174364  291579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:05:50.182224  291579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:05:50.189999  291579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:05:50.264906  291579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:05:50.403907  291579 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:05:50.403989  291579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:05:50.408143  291579 start.go:564] Will wait 60s for crictl version
	I1216 03:05:50.408215  291579 ssh_runner.go:195] Run: which crictl
	I1216 03:05:50.412256  291579 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:05:50.440804  291579 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:05:50.440903  291579 ssh_runner.go:195] Run: crio --version
	I1216 03:05:50.472051  291579 ssh_runner.go:195] Run: crio --version
	I1216 03:05:50.504937  291579 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 03:05:50.506235  291579 cli_runner.go:164] Run: docker network inspect newest-cni-991316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:05:50.525755  291579 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 03:05:50.530227  291579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:05:50.542800  291579 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 03:05:50.543992  291579 kubeadm.go:884] updating cluster {Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:05:50.544105  291579 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:05:50.544163  291579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:05:50.579557  291579 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:05:50.579579  291579 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:05:50.579633  291579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:05:50.607893  291579 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:05:50.607915  291579 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:05:50.607925  291579 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 03:05:50.608031  291579 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-991316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:05:50.608115  291579 ssh_runner.go:195] Run: crio config
	I1216 03:05:50.659156  291579 cni.go:84] Creating CNI manager for ""
	I1216 03:05:50.659175  291579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:05:50.659195  291579 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 03:05:50.659223  291579 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-991316 NodeName:newest-cni-991316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:05:50.659349  291579 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-991316"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
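
One detail worth calling out in the generated config above: the KubeletConfiguration sets cgroupDriver: systemd and points containerRuntimeEndpoint at the CRI-O socket, and both have to agree with CRI-O's own settings or the kubelet will fail to run pods. A small sketch that decodes that one YAML document and prints the two fields (hedged: gopkg.in/yaml.v3 and the local file name are assumptions for the example, not part of the log or of minikube's code):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Assumes the KubeletConfiguration document shown above was saved on its own
	// to kubelet-config.yaml (yaml.Unmarshal reads a single document).
	raw, err := os.ReadFile("kubelet-config.yaml")
	if err != nil {
		panic(err)
	}
	var cfg struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: cgroupDriver=%s runtime=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
}
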
	
	I1216 03:05:50.659413  291579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:05:50.668401  291579 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:05:50.668454  291579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:05:50.677316  291579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 03:05:50.690576  291579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 03:05:50.707328  291579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 03:05:50.722191  291579 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:05:50.725882  291579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:05:50.736151  291579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:05:50.815555  291579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:05:50.841303  291579 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316 for IP: 192.168.76.2
	I1216 03:05:50.841325  291579 certs.go:195] generating shared ca certs ...
	I1216 03:05:50.841347  291579 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:50.841492  291579 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:05:50.841534  291579 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:05:50.841544  291579 certs.go:257] generating profile certs ...
	I1216 03:05:50.841592  291579 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.key
	I1216 03:05:50.841618  291579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.crt with IP's: []
	I1216 03:05:51.003356  291579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.crt ...
	I1216 03:05:51.003390  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.crt: {Name:mk46d92bf37ae4905b60026134e1f5926c9c5a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:51.003561  291579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.key ...
	I1216 03:05:51.003574  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.key: {Name:mk9b38ba20ed56e2987b06a873a8a243aaa43e26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:51.003669  291579 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key.4c5ce275
	I1216 03:05:51.003685  291579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt.4c5ce275 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 03:05:51.046728  291579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt.4c5ce275 ...
	I1216 03:05:51.046751  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt.4c5ce275: {Name:mk2a91d8016552f2ab4f702cde3ac33ca0f31f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:51.046945  291579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key.4c5ce275 ...
	I1216 03:05:51.046968  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key.4c5ce275: {Name:mk56854cc1ed774d394ada05fcfd5f8b2c394f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:51.047056  291579 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt.4c5ce275 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt
	I1216 03:05:51.047138  291579 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key.4c5ce275 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key
	I1216 03:05:51.047198  291579 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key
	I1216 03:05:51.047214  291579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.crt with IP's: []
	I1216 03:05:51.212596  291579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.crt ...
	I1216 03:05:51.212628  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.crt: {Name:mk6fb1c02fd947b63644cc12c249b2dde18e4b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:05:51.212778  291579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key ...
	I1216 03:05:51.212790  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key: {Name:mk26f77d0991113611945bba421867dc85396e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
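
The certs.go steps above reuse the shared minikubeCA and then mint per-profile client, apiserver, and proxy-client certificates signed by it. A compressed sketch of that signing step with crypto/x509 (illustrative only: the file names, the subject, the validity period, and the PKCS#1 key format are assumptions for the example, not minikube's exact code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	caCertPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("ca.crt/ca.key are not PEM")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		panic(err)
	}

	// Fresh key pair for the client identity ("minikube-user" in the log).
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

The apiserver.crt generated above follows the same shape, except it is a serving certificate whose SANs include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2, as the log shows.
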
	I1216 03:05:51.212983  291579 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:05:51.213024  291579 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:05:51.213034  291579 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:05:51.213059  291579 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:05:51.213083  291579 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:05:51.213106  291579 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:05:51.213152  291579 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:05:51.213716  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:05:51.232435  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:05:51.249525  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:05:51.267312  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:05:51.285019  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 03:05:51.304038  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:05:51.322488  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:05:51.341529  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:05:51.359282  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:05:51.379651  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:05:51.397880  291579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:05:51.416669  291579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:05:51.429764  291579 ssh_runner.go:195] Run: openssl version
	I1216 03:05:51.437582  291579 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:05:51.445385  291579 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:05:51.453497  291579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:05:51.457219  291579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:05:51.457277  291579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:05:51.491738  291579 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:05:51.499566  291579 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:05:51.507245  291579 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:05:51.514978  291579 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:05:51.522461  291579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:05:51.527275  291579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:05:51.527337  291579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:05:51.561247  291579 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:05:51.570136  291579 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:05:51.578223  291579 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:05:51.586187  291579 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:05:51.593939  291579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:05:51.598446  291579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:05:51.598494  291579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:05:51.639971  291579 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:05:51.647895  291579 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
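
Each extra CA above gets the same treatment: copy it under /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can find it by hash lookup. A sketch of those two steps from Go, shelling out to openssl exactly as the log does (the cert path is the one shown above; creating links under /etc/ssl/certs still requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that the log creates with `ln -fs`.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
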
	I1216 03:05:51.655808  291579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:05:51.659908  291579 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:05:51.659974  291579 kubeadm.go:401] StartCluster: {Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:05:51.660052  291579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:05:51.660108  291579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:05:51.686152  291579 cri.go:89] found id: ""
	I1216 03:05:51.686231  291579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:05:51.694447  291579 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:05:51.702705  291579 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:05:51.702767  291579 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:05:51.710833  291579 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:05:51.710853  291579 kubeadm.go:158] found existing configuration files:
	
	I1216 03:05:51.710908  291579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:05:51.718888  291579 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:05:51.718962  291579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:05:51.726985  291579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:05:51.734831  291579 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:05:51.734902  291579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:05:51.742543  291579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:05:51.750690  291579 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:05:51.750752  291579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:05:51.757880  291579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:05:51.765319  291579 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:05:51.765373  291579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:05:51.773155  291579 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:05:51.815909  291579 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:05:51.815997  291579 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:05:51.884494  291579 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:05:51.884580  291579 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:05:51.884624  291579 kubeadm.go:319] OS: Linux
	I1216 03:05:51.884724  291579 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:05:51.884784  291579 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:05:51.884878  291579 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:05:51.884950  291579 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:05:51.885012  291579 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:05:51.885065  291579 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:05:51.885125  291579 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:05:51.885184  291579 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:05:51.956584  291579 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:05:51.956710  291579 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:05:51.956841  291579 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:05:51.971865  291579 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 03:05:49.027757  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:51.527449  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:47.435688  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	W1216 03:05:49.435779  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	W1216 03:05:51.437080  283028 pod_ready.go:104] pod "coredns-5dd5756b68-8lk58" is not "Ready", error: <nil>
	I1216 03:05:51.935333  283028 pod_ready.go:94] pod "coredns-5dd5756b68-8lk58" is "Ready"
	I1216 03:05:51.935362  283028 pod_ready.go:86] duration metric: took 34.005759617s for pod "coredns-5dd5756b68-8lk58" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:51.938460  283028 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:51.942629  283028 pod_ready.go:94] pod "etcd-old-k8s-version-073001" is "Ready"
	I1216 03:05:51.942651  283028 pod_ready.go:86] duration metric: took 4.167543ms for pod "etcd-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:51.948132  283028 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:51.963924  283028 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-073001" is "Ready"
	I1216 03:05:51.963957  283028 pod_ready.go:86] duration metric: took 15.749244ms for pod "kube-apiserver-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:51.967730  283028 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:51.973749  291579 out.go:252]   - Generating certificates and keys ...
	I1216 03:05:51.973886  291579 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:05:51.973984  291579 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:05:52.137859  291579 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:05:52.173072  291579 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:05:52.229905  291579 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:05:52.255056  291579 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:05:52.344760  291579 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:05:52.344987  291579 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-991316] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:05:52.448572  291579 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:05:52.448705  291579 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-991316] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:05:52.638657  291579 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:05:52.742302  291579 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:05:52.856006  291579 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:05:52.856068  291579 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:05:52.904213  291579 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:05:52.983036  291579 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:05:53.047669  291579 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:05:53.118241  291579 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:05:53.190189  291579 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:05:53.191050  291579 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:05:53.194749  291579 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:05:52.132840  283028 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-073001" is "Ready"
	I1216 03:05:52.132871  283028 pod_ready.go:86] duration metric: took 164.987484ms for pod "kube-controller-manager-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:52.334206  283028 pod_ready.go:83] waiting for pod "kube-proxy-mhxd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:52.732798  283028 pod_ready.go:94] pod "kube-proxy-mhxd9" is "Ready"
	I1216 03:05:52.732836  283028 pod_ready.go:86] duration metric: took 398.606576ms for pod "kube-proxy-mhxd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:52.934432  283028 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:53.333532  283028 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-073001" is "Ready"
	I1216 03:05:53.333557  283028 pod_ready.go:86] duration metric: took 399.102254ms for pod "kube-scheduler-old-k8s-version-073001" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:53.333569  283028 pod_ready.go:40] duration metric: took 35.409930629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:05:53.381096  283028 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1216 03:05:53.383628  283028 out.go:203] 
	W1216 03:05:53.385158  283028 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1216 03:05:53.386441  283028 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1216 03:05:53.387760  283028 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-073001" cluster and "default" namespace by default
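
The pod_ready waits interleaved above (processes 283028 and 284571) all follow the same pattern: poll each kube-system pod matching a label until its Ready condition is True, or until the timeout fires. A rough client-go equivalent (hedged: the label selector, namespace, and 6-minute budget are taken from the log; this is not minikube's pod_ready.go, and PollUntilContextTimeout assumes a recent apimachinery release):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label the log waits on for the CoreDNS pods.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all kube-dns pods Ready")
}
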
	I1216 03:05:53.196364  291579 out.go:252]   - Booting up control plane ...
	I1216 03:05:53.196487  291579 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:05:53.197084  291579 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:05:53.197980  291579 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:05:53.211645  291579 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:05:53.211796  291579 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:05:53.218637  291579 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:05:53.218970  291579 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:05:53.219043  291579 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:05:53.320549  291579 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:05:53.320702  291579 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:05:53.822194  291579 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.782689ms
	I1216 03:05:53.825445  291579 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:05:53.825598  291579 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 03:05:53.825714  291579 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:05:53.825801  291579 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1216 03:05:54.026523  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	W1216 03:05:56.026806  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	I1216 03:05:54.830880  291579 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005209424s
	I1216 03:05:55.510146  291579 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.684571606s
	I1216 03:05:57.326904  291579 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501348068s
	I1216 03:05:57.342563  291579 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:05:57.352335  291579 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:05:57.361778  291579 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:05:57.362021  291579 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-991316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:05:57.369954  291579 kubeadm.go:319] [bootstrap-token] Using token: madq5a.rltidf7isi01juev
	I1216 03:05:57.371256  291579 out.go:252]   - Configuring RBAC rules ...
	I1216 03:05:57.371413  291579 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:05:57.375176  291579 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:05:57.380212  291579 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:05:57.382691  291579 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:05:57.385306  291579 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:05:57.387502  291579 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:05:57.733686  291579 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:05:58.149545  291579 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:05:58.732951  291579 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:05:58.734230  291579 kubeadm.go:319] 
	I1216 03:05:58.734324  291579 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:05:58.734338  291579 kubeadm.go:319] 
	I1216 03:05:58.734441  291579 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:05:58.734450  291579 kubeadm.go:319] 
	I1216 03:05:58.734488  291579 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:05:58.734582  291579 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:05:58.734687  291579 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:05:58.734706  291579 kubeadm.go:319] 
	I1216 03:05:58.734782  291579 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:05:58.734791  291579 kubeadm.go:319] 
	I1216 03:05:58.734868  291579 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:05:58.734878  291579 kubeadm.go:319] 
	I1216 03:05:58.734956  291579 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:05:58.735060  291579 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:05:58.735167  291579 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:05:58.735215  291579 kubeadm.go:319] 
	I1216 03:05:58.735323  291579 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:05:58.735385  291579 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:05:58.735391  291579 kubeadm.go:319] 
	I1216 03:05:58.735468  291579 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token madq5a.rltidf7isi01juev \
	I1216 03:05:58.735579  291579 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:05:58.735600  291579 kubeadm.go:319] 	--control-plane 
	I1216 03:05:58.735603  291579 kubeadm.go:319] 
	I1216 03:05:58.735669  291579 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:05:58.735675  291579 kubeadm.go:319] 
	I1216 03:05:58.735764  291579 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token madq5a.rltidf7isi01juev \
	I1216 03:05:58.735954  291579 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:05:58.738429  291579 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:05:58.738542  291579 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
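
The --discovery-token-ca-cert-hash printed in the join command above is not arbitrary: it is the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA they fetch over the token-based bootstrap channel. A small sketch that recomputes the value kubeadm prints (the ca.crt path is an assumption; on this node it is /var/lib/minikube/certs/ca.crt, on a stock kubeadm host /etc/kubernetes/pki/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("ca.crt is not PEM")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
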
	I1216 03:05:58.738574  291579 cni.go:84] Creating CNI manager for ""
	I1216 03:05:58.738596  291579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:05:58.740940  291579 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 03:05:58.742141  291579 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:05:58.746529  291579 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1216 03:05:58.746549  291579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:05:58.759145  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:05:58.970354  291579 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:05:58.970435  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:05:58.970462  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-991316 minikube.k8s.io/updated_at=2025_12_16T03_05_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=newest-cni-991316 minikube.k8s.io/primary=true
	I1216 03:05:58.982445  291579 ops.go:34] apiserver oom_adj: -16
	I1216 03:05:59.055524  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:05:59.556371  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1216 03:05:58.027914  284571 pod_ready.go:104] pod "coredns-7d764666f9-nm9bc" is not "Ready", error: <nil>
	I1216 03:05:58.526483  284571 pod_ready.go:94] pod "coredns-7d764666f9-nm9bc" is "Ready"
	I1216 03:05:58.526516  284571 pod_ready.go:86] duration metric: took 35.50521763s for pod "coredns-7d764666f9-nm9bc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:58.528948  284571 pod_ready.go:83] waiting for pod "etcd-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:58.532888  284571 pod_ready.go:94] pod "etcd-no-preload-307185" is "Ready"
	I1216 03:05:58.532925  284571 pod_ready.go:86] duration metric: took 3.95835ms for pod "etcd-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:58.535054  284571 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:58.538804  284571 pod_ready.go:94] pod "kube-apiserver-no-preload-307185" is "Ready"
	I1216 03:05:58.538839  284571 pod_ready.go:86] duration metric: took 3.763722ms for pod "kube-apiserver-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:58.540528  284571 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:58.724839  284571 pod_ready.go:94] pod "kube-controller-manager-no-preload-307185" is "Ready"
	I1216 03:05:58.724872  284571 pod_ready.go:86] duration metric: took 184.322268ms for pod "kube-controller-manager-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:58.924987  284571 pod_ready.go:83] waiting for pod "kube-proxy-tp2h2" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:59.325473  284571 pod_ready.go:94] pod "kube-proxy-tp2h2" is "Ready"
	I1216 03:05:59.325503  284571 pod_ready.go:86] duration metric: took 400.491017ms for pod "kube-proxy-tp2h2" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:59.524900  284571 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:59.925006  284571 pod_ready.go:94] pod "kube-scheduler-no-preload-307185" is "Ready"
	I1216 03:05:59.925032  284571 pod_ready.go:86] duration metric: took 400.111004ms for pod "kube-scheduler-no-preload-307185" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:05:59.925043  284571 pod_ready.go:40] duration metric: took 36.906677154s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:05:59.973310  284571 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 03:05:59.975035  284571 out.go:179] * Done! kubectl is now configured to use "no-preload-307185" cluster and "default" namespace by default
	I1216 03:06:00.055722  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:00.556138  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:01.056540  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:01.556628  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:02.055872  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:02.555987  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:03.055986  291579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:03.122239  291579 kubeadm.go:1114] duration metric: took 4.15186582s to wait for elevateKubeSystemPrivileges
	I1216 03:06:03.122268  291579 kubeadm.go:403] duration metric: took 11.462301317s to StartCluster
	I1216 03:06:03.122288  291579 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:03.122362  291579 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:03.123753  291579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:03.123998  291579 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:03.124018  291579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:06:03.124029  291579 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:03.124112  291579 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-991316"
	I1216 03:06:03.124138  291579 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-991316"
	I1216 03:06:03.124181  291579 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:03.124207  291579 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:03.124142  291579 addons.go:70] Setting default-storageclass=true in profile "newest-cni-991316"
	I1216 03:06:03.124254  291579 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-991316"
	I1216 03:06:03.124584  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:03.124755  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:03.129411  291579 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:03.130973  291579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:03.148669  291579 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:06:03.149300  291579 addons.go:239] Setting addon default-storageclass=true in "newest-cni-991316"
	I1216 03:06:03.149348  291579 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:03.149845  291579 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:03.150116  291579 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:03.150140  291579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:03.150217  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:03.178088  291579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:03.180607  291579 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:03.180634  291579 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:03.180706  291579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:03.204600  291579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:03.226302  291579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:06:03.271472  291579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:03.300537  291579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:03.318771  291579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:03.392993  291579 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1216 03:06:03.394312  291579 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:03.394374  291579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:03.592585  291579 api_server.go:72] duration metric: took 468.555575ms to wait for apiserver process to appear ...
	I1216 03:06:03.592609  291579 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:03.592630  291579 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:03.599813  291579 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
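
The healthz probe above boils down to an HTTPS GET against https://192.168.76.2:8443/healthz that must return 200 with body "ok" before the control-plane version is queried. A bare-bones sketch of that check (certificate verification is skipped only to keep the example short; a real client should trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: verify against the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}
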
	I1216 03:06:03.600383  291579 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:06:03.600670  291579 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 03:06:03.600691  291579 api_server.go:131] duration metric: took 8.074867ms to wait for apiserver health ...
	I1216 03:06:03.600708  291579 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:03.602237  291579 addons.go:530] duration metric: took 478.20989ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:06:03.606308  291579 system_pods.go:59] 9 kube-system pods found
	I1216 03:06:03.606337  291579 system_pods.go:61] "coredns-7d764666f9-86ggg" [7d507301-7465-4008-a336-b3ccdf6ac711] Pending
	I1216 03:06:03.606349  291579 system_pods.go:61] "coredns-7d764666f9-ss9mb" [2b1e3d81-6f85-4690-86f6-47b5e7665cbd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 03:06:03.606355  291579 system_pods.go:61] "etcd-newest-cni-991316" [628355b8-6876-4153-97e8-294f83717eaf] Running
	I1216 03:06:03.606366  291579 system_pods.go:61] "kindnet-7vnx2" [693caa56-221c-4967-b459-24c95a6f228b] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 03:06:03.606375  291579 system_pods.go:61] "kube-apiserver-newest-cni-991316" [80fa29df-b694-4669-a80b-e62f176662a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:06:03.606380  291579 system_pods.go:61] "kube-controller-manager-newest-cni-991316" [6cff15c4-01ea-444f-8e42-d10e73a10abf] Running
	I1216 03:06:03.606394  291579 system_pods.go:61] "kube-proxy-k55dg" [3dcf431e-16a0-4327-b437-ad2b0b7cbea0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:06:03.606404  291579 system_pods.go:61] "kube-scheduler-newest-cni-991316" [17447c80-9e25-41d6-844f-3714404a2404] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:06:03.606417  291579 system_pods.go:61] "storage-provisioner" [b2aa6962-6de7-4fb0-914b-43e726858087] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 03:06:03.606440  291579 system_pods.go:74] duration metric: took 5.707173ms to wait for pod list to return data ...
	I1216 03:06:03.606456  291579 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:03.610030  291579 default_sa.go:45] found service account: "default"
	I1216 03:06:03.610055  291579 default_sa.go:55] duration metric: took 3.592911ms for default service account to be created ...
	I1216 03:06:03.610070  291579 kubeadm.go:587] duration metric: took 486.044001ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 03:06:03.610088  291579 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:03.613525  291579 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:03.613567  291579 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:03.613586  291579 node_conditions.go:105] duration metric: took 3.491948ms to run NodePressure ...
	I1216 03:06:03.613605  291579 start.go:242] waiting for startup goroutines ...
	I1216 03:06:03.896945  291579 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-991316" context rescaled to 1 replicas
	I1216 03:06:03.896993  291579 start.go:247] waiting for cluster config update ...
	I1216 03:06:03.897008  291579 start.go:256] writing updated cluster config ...
	I1216 03:06:03.897330  291579 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:03.949648  291579 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 03:06:03.952253  291579 out.go:179] * Done! kubectl is now configured to use "newest-cni-991316" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.81434508Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-k55dg/POD" id=1d65c511-1954-457e-8a7e-c8f6a10f4e6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.814432006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.816116264Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.816685219Z" level=info msg="Ran pod sandbox 1a6965c42237337db658d8f290987842ef6dd37a657f66971bf6820efedd5b04 with infra container: kube-system/kindnet-7vnx2/POD" id=a3157291-6d38-418a-9025-b28cf72c88eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.817044802Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1d65c511-1954-457e-8a7e-c8f6a10f4e6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.817913867Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8a2e7304-9d31-4369-827b-00a677aaa4f5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.818598349Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.818905491Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b78ffb05-0f11-4778-af6f-71d7d9bd036c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.819573721Z" level=info msg="Ran pod sandbox 021982dd1346ce4e26b376ac40cf70087a7b82928112387abc59893cdf6c36a6 with infra container: kube-system/kube-proxy-k55dg/POD" id=1d65c511-1954-457e-8a7e-c8f6a10f4e6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.821052642Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1bd3d87b-f9e9-4a6c-a3f7-4ee83ddf633a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.822292406Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c2b5a94a-a10a-4a62-8aa3-ba4ebfb06929 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.825072148Z" level=info msg="Creating container: kube-system/kindnet-7vnx2/kindnet-cni" id=16334b7d-afd8-4cc7-ace8-aae8063a9ca8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.82517777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.826712098Z" level=info msg="Creating container: kube-system/kube-proxy-k55dg/kube-proxy" id=fafd4ef0-8a4a-468c-b111-830d71ec9edb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.82686065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.829754758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.830216104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.833657407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.834075347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.865380852Z" level=info msg="Created container 21996b537407c9525cb95dc0e57836d816f7ce2f88a36b145f6e9079b9b2cd90: kube-system/kindnet-7vnx2/kindnet-cni" id=16334b7d-afd8-4cc7-ace8-aae8063a9ca8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.866097121Z" level=info msg="Starting container: 21996b537407c9525cb95dc0e57836d816f7ce2f88a36b145f6e9079b9b2cd90" id=c66fd416-7ecd-4997-800f-99d54e5af118 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.867766028Z" level=info msg="Started container" PID=1595 containerID=21996b537407c9525cb95dc0e57836d816f7ce2f88a36b145f6e9079b9b2cd90 description=kube-system/kindnet-7vnx2/kindnet-cni id=c66fd416-7ecd-4997-800f-99d54e5af118 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a6965c42237337db658d8f290987842ef6dd37a657f66971bf6820efedd5b04
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.868869738Z" level=info msg="Created container 0e841ff13908fd6cbf55057dcbe8659d9414388d3c9135406f44bfdb5265f74a: kube-system/kube-proxy-k55dg/kube-proxy" id=fafd4ef0-8a4a-468c-b111-830d71ec9edb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.869427976Z" level=info msg="Starting container: 0e841ff13908fd6cbf55057dcbe8659d9414388d3c9135406f44bfdb5265f74a" id=0d230c9b-d2d6-48ad-817a-76c50ca20a2a name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:03 newest-cni-991316 crio[781]: time="2025-12-16T03:06:03.872326733Z" level=info msg="Started container" PID=1596 containerID=0e841ff13908fd6cbf55057dcbe8659d9414388d3c9135406f44bfdb5265f74a description=kube-system/kube-proxy-k55dg/kube-proxy id=0d230c9b-d2d6-48ad-817a-76c50ca20a2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=021982dd1346ce4e26b376ac40cf70087a7b82928112387abc59893cdf6c36a6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0e841ff13908f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   021982dd1346c       kube-proxy-k55dg                            kube-system
	21996b537407c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   1a6965c422373       kindnet-7vnx2                               kube-system
	0783cfc0deca9       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   a549ad635c9c3       kube-controller-manager-newest-cni-991316   kube-system
	8b2cce1a4975d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   519dddceea34c       etcd-newest-cni-991316                      kube-system
	2f9a1eac25ed3       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   5546c359d6598       kube-apiserver-newest-cni-991316            kube-system
	703818fa24a55       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   7cc6616a4e67b       kube-scheduler-newest-cni-991316            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-991316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-991316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=newest-cni-991316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_05_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:05:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-991316
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:05:58 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:05:58 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:05:58 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 16 Dec 2025 03:05:58 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-991316
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                58335f55-1f55-4122-b10c-c1f511a1797b
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-991316                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-7vnx2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-991316             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-991316    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-k55dg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-991316             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-991316 event: Registered Node newest-cni-991316 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [8b2cce1a4975d12527bf4039b6f7508838fe69bb90d143b6155540a6632cd66c] <==
	{"level":"warn","ts":"2025-12-16T03:05:54.867048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.874410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.883625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.891582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.898168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.904985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.911478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.917746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.923958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.932248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.939977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.947594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.953806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.960387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.973395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.980048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.986408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.992605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:54.999074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:55.011674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:55.024431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:55.031338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:55.037880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:55.044980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:55.088975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38250","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:06:05 up 48 min,  0 user,  load average: 3.93, 2.98, 1.95
	Linux newest-cni-991316 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [21996b537407c9525cb95dc0e57836d816f7ce2f88a36b145f6e9079b9b2cd90] <==
	I1216 03:06:04.108626       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:06:04.108906       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 03:06:04.109020       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:06:04.109043       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:06:04.109079       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:06:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:06:04.309261       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:06:04.309293       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:06:04.309307       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:06:04.309464       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:06:04.609519       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:06:04.609549       1 metrics.go:72] Registering metrics
	I1216 03:06:04.609617       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [2f9a1eac25ed331b85b1815857c48f2092a69544118206d8c89c245f4b589f0e] <==
	I1216 03:05:55.579461       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:05:55.579484       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:05:55.579530       1 aggregator.go:187] initial CRD sync complete...
	I1216 03:05:55.579542       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:05:55.579550       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:05:55.579556       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:05:55.591347       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:05:55.752754       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:05:56.462883       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1216 03:05:56.466553       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1216 03:05:56.466571       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 03:05:56.913429       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:05:56.948538       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:05:57.065151       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 03:05:57.070697       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1216 03:05:57.071960       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:05:57.077026       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:05:57.487125       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:05:58.138507       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:05:58.148738       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 03:05:58.155008       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 03:06:02.943311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:06:02.947023       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:06:03.038404       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:06:03.488695       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0783cfc0deca900ce52ae68e79555c34d02731b9ef01105c839847352003c259] <==
	I1216 03:06:02.293685       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.293728       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.293580       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.293854       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.293864       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1216 03:06:02.293974       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.294100       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-991316"
	I1216 03:06:02.294206       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1216 03:06:02.293614       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.294677       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.294712       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.294945       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.295314       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.295362       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.295381       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.295401       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.295457       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.295680       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.300351       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.301648       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:02.306315       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-991316" podCIDRs=["10.42.0.0/24"]
	I1216 03:06:02.392522       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:02.392541       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 03:06:02.392546       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 03:06:02.402083       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [0e841ff13908fd6cbf55057dcbe8659d9414388d3c9135406f44bfdb5265f74a] <==
	I1216 03:06:03.910074       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:06:03.986523       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:04.087554       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:04.087608       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 03:06:04.087695       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:06:04.107544       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:06:04.107605       1 server_linux.go:136] "Using iptables Proxier"
	I1216 03:06:04.113412       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:06:04.113788       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 03:06:04.113811       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:04.115931       1 config.go:200] "Starting service config controller"
	I1216 03:06:04.115955       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:06:04.115987       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:06:04.115994       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:06:04.116008       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:06:04.116026       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:06:04.116219       1 config.go:309] "Starting node config controller"
	I1216 03:06:04.116236       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:06:04.116244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:06:04.216111       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:06:04.216125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:06:04.216338       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [703818fa24a55b3f26a72e4972274f52122640733757b21f4328d8236aa7b3ff] <==
	E1216 03:05:55.512606       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1216 03:05:55.512701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1216 03:05:55.512683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1216 03:05:55.512758       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1216 03:05:55.512855       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1216 03:05:55.513151       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1216 03:05:56.417044       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1216 03:05:56.418111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1216 03:05:56.470421       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1216 03:05:56.471439       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1216 03:05:56.484662       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1216 03:05:56.485669       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1216 03:05:56.550752       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1216 03:05:56.551831       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1216 03:05:56.554764       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1216 03:05:56.555549       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1216 03:05:56.565678       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 03:05:56.566589       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1216 03:05:56.586725       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1216 03:05:56.587611       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1216 03:05:56.613988       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1216 03:05:56.614980       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1216 03:05:56.642962       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1216 03:05:56.643947       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1216 03:05:59.104811       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 03:05:58 newest-cni-991316 kubelet[1311]: E1216 03:05:58.985416    1311 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-991316\" already exists" pod="kube-system/kube-scheduler-newest-cni-991316"
	Dec 16 03:05:58 newest-cni-991316 kubelet[1311]: E1216 03:05:58.985462    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-991316" containerName="kube-controller-manager"
	Dec 16 03:05:58 newest-cni-991316 kubelet[1311]: E1216 03:05:58.985496    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-991316" containerName="kube-scheduler"
	Dec 16 03:05:59 newest-cni-991316 kubelet[1311]: E1216 03:05:59.973108    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-991316" containerName="kube-controller-manager"
	Dec 16 03:05:59 newest-cni-991316 kubelet[1311]: E1216 03:05:59.973193    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-991316" containerName="kube-scheduler"
	Dec 16 03:05:59 newest-cni-991316 kubelet[1311]: E1216 03:05:59.973247    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-991316" containerName="kube-apiserver"
	Dec 16 03:05:59 newest-cni-991316 kubelet[1311]: E1216 03:05:59.973383    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-991316" containerName="etcd"
	Dec 16 03:06:00 newest-cni-991316 kubelet[1311]: I1216 03:06:00.006112    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-991316" podStartSLOduration=4.006092205 podStartE2EDuration="4.006092205s" podCreationTimestamp="2025-12-16 03:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:05:59.996678987 +0000 UTC m=+2.124108466" watchObservedRunningTime="2025-12-16 03:06:00.006092205 +0000 UTC m=+2.133521684"
	Dec 16 03:06:00 newest-cni-991316 kubelet[1311]: I1216 03:06:00.007245    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-991316" podStartSLOduration=2.007228338 podStartE2EDuration="2.007228338s" podCreationTimestamp="2025-12-16 03:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:00.005774516 +0000 UTC m=+2.133203996" watchObservedRunningTime="2025-12-16 03:06:00.007228338 +0000 UTC m=+2.134657799"
	Dec 16 03:06:00 newest-cni-991316 kubelet[1311]: I1216 03:06:00.021559    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-991316" podStartSLOduration=2.021536803 podStartE2EDuration="2.021536803s" podCreationTimestamp="2025-12-16 03:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:00.021470005 +0000 UTC m=+2.148899485" watchObservedRunningTime="2025-12-16 03:06:00.021536803 +0000 UTC m=+2.148966282"
	Dec 16 03:06:00 newest-cni-991316 kubelet[1311]: I1216 03:06:00.031497    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-991316" podStartSLOduration=2.031477746 podStartE2EDuration="2.031477746s" podCreationTimestamp="2025-12-16 03:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:00.031412913 +0000 UTC m=+2.158842393" watchObservedRunningTime="2025-12-16 03:06:00.031477746 +0000 UTC m=+2.158907224"
	Dec 16 03:06:00 newest-cni-991316 kubelet[1311]: E1216 03:06:00.974247    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-991316" containerName="kube-scheduler"
	Dec 16 03:06:00 newest-cni-991316 kubelet[1311]: E1216 03:06:00.974475    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-991316" containerName="kube-apiserver"
	Dec 16 03:06:02 newest-cni-991316 kubelet[1311]: I1216 03:06:02.396152    1311 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 16 03:06:02 newest-cni-991316 kubelet[1311]: I1216 03:06:02.396934    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: E1216 03:06:03.185307    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-991316" containerName="kube-scheduler"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.581878    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-cni-cfg\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.581915    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-lib-modules\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.581944    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-xtables-lock\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.581965    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w24sk\" (UniqueName: \"kubernetes.io/projected/693caa56-221c-4967-b459-24c95a6f228b-kube-api-access-w24sk\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.581991    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-kube-proxy\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.582011    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-xtables-lock\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.582123    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-lib-modules\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.582231    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75q4\" (UniqueName: \"kubernetes.io/projected/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-kube-api-access-g75q4\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:03 newest-cni-991316 kubelet[1311]: I1216 03:06:03.996479    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-7vnx2" podStartSLOduration=0.996459788 podStartE2EDuration="996.459788ms" podCreationTimestamp="2025-12-16 03:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:03.99604868 +0000 UTC m=+6.123478159" watchObservedRunningTime="2025-12-16 03:06:03.996459788 +0000 UTC m=+6.123889268"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-991316 -n newest-cni-991316
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-991316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-86ggg storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner: exit status 1 (77.716884ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-86ggg" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-073001 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-073001 --alsologtostderr -v=1: exit status 80 (2.467223706s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-073001 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:06:05.153986  296360 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:05.154095  296360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.154106  296360 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:05.154113  296360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.154436  296360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:05.154751  296360 out.go:368] Setting JSON to false
	I1216 03:06:05.154774  296360 mustload.go:66] Loading cluster: old-k8s-version-073001
	I1216 03:06:05.155216  296360 config.go:182] Loaded profile config "old-k8s-version-073001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 03:06:05.155623  296360 cli_runner.go:164] Run: docker container inspect old-k8s-version-073001 --format={{.State.Status}}
	I1216 03:06:05.175921  296360 host.go:66] Checking if "old-k8s-version-073001" exists ...
	I1216 03:06:05.176263  296360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:05.235342  296360 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-16 03:06:05.224978265 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:05.235919  296360 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765836331-22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765836331-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-073001 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 03:06:05.237730  296360 out.go:179] * Pausing node old-k8s-version-073001 ... 
	I1216 03:06:05.239176  296360 host.go:66] Checking if "old-k8s-version-073001" exists ...
	I1216 03:06:05.239532  296360 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:05.239580  296360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-073001
	I1216 03:06:05.262767  296360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/old-k8s-version-073001/id_rsa Username:docker}
	I1216 03:06:05.369014  296360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:05.384072  296360 pause.go:52] kubelet running: true
	I1216 03:06:05.384131  296360 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:05.572720  296360 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:05.572867  296360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:05.657948  296360 cri.go:89] found id: "d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e"
	I1216 03:06:05.657974  296360 cri.go:89] found id: "9cdeb711ba8cb21cb6b70ca53a42ba1f3469ae8a8bbdd5224d97bb4f07493272"
	I1216 03:06:05.657980  296360 cri.go:89] found id: "a3cbf298075eacbbdf85122557117214f1ba59e4f6064660dc14a333529bf537"
	I1216 03:06:05.657986  296360 cri.go:89] found id: "83b2b7f69ee8694211740fb2d144ea84d2edac5661e32ebb64e18630319e3734"
	I1216 03:06:05.657991  296360 cri.go:89] found id: "05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd"
	I1216 03:06:05.657998  296360 cri.go:89] found id: "0606b7fb4f398a35174930e39b2232f673f81cb3addfe09cde5075280b7c7163"
	I1216 03:06:05.658003  296360 cri.go:89] found id: "0a70e4e6115e7fb5fa291c9f5fc168f6b805b31d2c65e3af685c49ba01a902f0"
	I1216 03:06:05.658008  296360 cri.go:89] found id: "4a7db7caad9c219a8e3436093739c553d654ae8089c73525ba6d691a792a903d"
	I1216 03:06:05.658013  296360 cri.go:89] found id: "ad9aca2d6ec1198b1eba32ea287c5fb54af4d49b3e1966a31417fcc7f6930a0d"
	I1216 03:06:05.658028  296360 cri.go:89] found id: "0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	I1216 03:06:05.658037  296360 cri.go:89] found id: "7f38f2197a3465751552a15105fef3e94e2c90032c3cdb5490f61f90e5bc0e69"
	I1216 03:06:05.658042  296360 cri.go:89] found id: ""
	I1216 03:06:05.658093  296360 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:05.671108  296360 retry.go:31] will retry after 310.477181ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:05Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:05.982618  296360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:05.998008  296360 pause.go:52] kubelet running: false
	I1216 03:06:05.998066  296360 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:06.187793  296360 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:06.187909  296360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:06.268862  296360 cri.go:89] found id: "d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e"
	I1216 03:06:06.268885  296360 cri.go:89] found id: "9cdeb711ba8cb21cb6b70ca53a42ba1f3469ae8a8bbdd5224d97bb4f07493272"
	I1216 03:06:06.268891  296360 cri.go:89] found id: "a3cbf298075eacbbdf85122557117214f1ba59e4f6064660dc14a333529bf537"
	I1216 03:06:06.268896  296360 cri.go:89] found id: "83b2b7f69ee8694211740fb2d144ea84d2edac5661e32ebb64e18630319e3734"
	I1216 03:06:06.268900  296360 cri.go:89] found id: "05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd"
	I1216 03:06:06.268906  296360 cri.go:89] found id: "0606b7fb4f398a35174930e39b2232f673f81cb3addfe09cde5075280b7c7163"
	I1216 03:06:06.268910  296360 cri.go:89] found id: "0a70e4e6115e7fb5fa291c9f5fc168f6b805b31d2c65e3af685c49ba01a902f0"
	I1216 03:06:06.268913  296360 cri.go:89] found id: "4a7db7caad9c219a8e3436093739c553d654ae8089c73525ba6d691a792a903d"
	I1216 03:06:06.268916  296360 cri.go:89] found id: "ad9aca2d6ec1198b1eba32ea287c5fb54af4d49b3e1966a31417fcc7f6930a0d"
	I1216 03:06:06.268924  296360 cri.go:89] found id: "0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	I1216 03:06:06.268929  296360 cri.go:89] found id: "7f38f2197a3465751552a15105fef3e94e2c90032c3cdb5490f61f90e5bc0e69"
	I1216 03:06:06.268933  296360 cri.go:89] found id: ""
	I1216 03:06:06.268977  296360 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:06.282953  296360 retry.go:31] will retry after 491.320725ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:06Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:06.774639  296360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:06.787698  296360 pause.go:52] kubelet running: false
	I1216 03:06:06.787749  296360 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:06.918957  296360 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:06.919038  296360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:06.985815  296360 cri.go:89] found id: "d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e"
	I1216 03:06:06.985855  296360 cri.go:89] found id: "9cdeb711ba8cb21cb6b70ca53a42ba1f3469ae8a8bbdd5224d97bb4f07493272"
	I1216 03:06:06.985860  296360 cri.go:89] found id: "a3cbf298075eacbbdf85122557117214f1ba59e4f6064660dc14a333529bf537"
	I1216 03:06:06.985865  296360 cri.go:89] found id: "83b2b7f69ee8694211740fb2d144ea84d2edac5661e32ebb64e18630319e3734"
	I1216 03:06:06.985870  296360 cri.go:89] found id: "05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd"
	I1216 03:06:06.985883  296360 cri.go:89] found id: "0606b7fb4f398a35174930e39b2232f673f81cb3addfe09cde5075280b7c7163"
	I1216 03:06:06.985887  296360 cri.go:89] found id: "0a70e4e6115e7fb5fa291c9f5fc168f6b805b31d2c65e3af685c49ba01a902f0"
	I1216 03:06:06.985890  296360 cri.go:89] found id: "4a7db7caad9c219a8e3436093739c553d654ae8089c73525ba6d691a792a903d"
	I1216 03:06:06.985892  296360 cri.go:89] found id: "ad9aca2d6ec1198b1eba32ea287c5fb54af4d49b3e1966a31417fcc7f6930a0d"
	I1216 03:06:06.985898  296360 cri.go:89] found id: "0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	I1216 03:06:06.985901  296360 cri.go:89] found id: "7f38f2197a3465751552a15105fef3e94e2c90032c3cdb5490f61f90e5bc0e69"
	I1216 03:06:06.985904  296360 cri.go:89] found id: ""
	I1216 03:06:06.985940  296360 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:06.997596  296360 retry.go:31] will retry after 314.010474ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:06Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:07.312174  296360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:07.326338  296360 pause.go:52] kubelet running: false
	I1216 03:06:07.326405  296360 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:07.461112  296360 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:07.461211  296360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:07.533992  296360 cri.go:89] found id: "d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e"
	I1216 03:06:07.534015  296360 cri.go:89] found id: "9cdeb711ba8cb21cb6b70ca53a42ba1f3469ae8a8bbdd5224d97bb4f07493272"
	I1216 03:06:07.534021  296360 cri.go:89] found id: "a3cbf298075eacbbdf85122557117214f1ba59e4f6064660dc14a333529bf537"
	I1216 03:06:07.534026  296360 cri.go:89] found id: "83b2b7f69ee8694211740fb2d144ea84d2edac5661e32ebb64e18630319e3734"
	I1216 03:06:07.534030  296360 cri.go:89] found id: "05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd"
	I1216 03:06:07.534035  296360 cri.go:89] found id: "0606b7fb4f398a35174930e39b2232f673f81cb3addfe09cde5075280b7c7163"
	I1216 03:06:07.534040  296360 cri.go:89] found id: "0a70e4e6115e7fb5fa291c9f5fc168f6b805b31d2c65e3af685c49ba01a902f0"
	I1216 03:06:07.534045  296360 cri.go:89] found id: "4a7db7caad9c219a8e3436093739c553d654ae8089c73525ba6d691a792a903d"
	I1216 03:06:07.534050  296360 cri.go:89] found id: "ad9aca2d6ec1198b1eba32ea287c5fb54af4d49b3e1966a31417fcc7f6930a0d"
	I1216 03:06:07.534076  296360 cri.go:89] found id: "0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	I1216 03:06:07.534086  296360 cri.go:89] found id: "7f38f2197a3465751552a15105fef3e94e2c90032c3cdb5490f61f90e5bc0e69"
	I1216 03:06:07.534090  296360 cri.go:89] found id: ""
	I1216 03:06:07.534133  296360 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:07.548957  296360 out.go:203] 
	W1216 03:06:07.550188  296360 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 03:06:07.550210  296360 out.go:285] * 
	* 
	W1216 03:06:07.555220  296360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:06:07.556504  296360 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-073001 --alsologtostderr -v=1 failed: exit status 80
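Editor's note: the pause path shown in the stderr above first lists CRI containers per namespace with crictl, then repeatedly runs "sudo runc list -f json" on the node and retries with a short delay before giving up with GUEST_PAUSE. The following is a minimal, hypothetical Go sketch of that retry-around-command pattern only (file name and function are illustrative; this is not minikube's retry.go):

// retry_runc_list.go: illustrative sketch of "run a command, retry briefly on
// failure, give up after a few attempts", matching the behavior in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func listRunningContainers(attempts int, delay time.Duration) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// Same command the pause log runs on the node.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("runc list failed (attempt %d/%d): %v: %s", i+1, attempts, err, out)
		time.Sleep(delay)
	}
	return nil, lastErr
}

func main() {
	out, err := listRunningContainers(4, 300*time.Millisecond)
	if err != nil {
		// On a CRI-O node where /run/runc is absent, this fails the same way
		// the test log does above.
		fmt.Println("error:", err)
		return
	}
	fmt.Println(string(out))
}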
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
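Editor's note: the line above is the harness snapshot of the proxy environment; a tiny sketch (assumed file name, not the harness code) of printing the same three variables, with "<empty>" standing in for unset values:

// env_snapshot.go: print the proxy-related environment variables the
// post-mortem records above.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		val := os.Getenv(key)
		if val == "" {
			val = "<empty>"
		}
		fmt.Printf("%s=%q\n", key, val)
	}
}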
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-073001
helpers_test.go:244: (dbg) docker inspect old-k8s-version-073001:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d",
	        "Created": "2025-12-16T03:03:54.698671723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283298,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:05:07.223268529Z",
	            "FinishedAt": "2025-12-16T03:05:06.325639261Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/hosts",
	        "LogPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d-json.log",
	        "Name": "/old-k8s-version-073001",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-073001:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-073001",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d",
	                "LowerDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-073001",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-073001/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-073001",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-073001",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-073001",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d0a875f9a9e2c8222eadb9a06e5d6af6054a0095b855f026051ae5f9ba00f5d8",
	            "SandboxKey": "/var/run/docker/netns/d0a875f9a9e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-073001": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5dccd8a47ad3460508b5e229ec860f06a2e52bc9489d8882cbbf26ed9824ada8",
	                    "EndpointID": "5e54896ac97cede9fbfa5df1811b07e87248ae180e781d61df34c4ab6778be7b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "5a:e3:8f:ff:f0:83",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-073001",
	                        "76d012974e40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
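Editor's note: the NetworkSettings.Ports map in the inspect output above is also what the pause log reads earlier via the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to find the SSH port (33073 here). A hedged Go sketch, assuming the hypothetical file name inspect_port.go, that decodes the same JSON to get that port:

// inspect_port.go: read the 22/tcp host port from `docker container inspect`,
// equivalent to the Go template used in the pause log.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	var results []inspect
	if err := json.Unmarshal(out, &results); err != nil {
		return "", err
	}
	if len(results) == 0 {
		return "", fmt.Errorf("no container named %s", container)
	}
	bindings := results[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no 22/tcp binding on %s", container)
	}
	return bindings[0].HostPort, nil // "33073" for the node above
}

func main() {
	port, err := sshHostPort("old-k8s-version-073001")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(port)
}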
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001: exit status 2 (358.672222ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
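Editor's note: the status check above prints "Running" yet exits 2, which the harness treats as possibly OK. A hedged sketch (not the harness code; binary path and profile copied from the log) of running the same command and keeping both the printed host state and the exit code:

// status_check.go: run "minikube status --format={{.Host}}" and report the
// host state together with the process exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(minikubeBin, profile string) (state string, exitCode int, err error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	state = strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit: keep the code and whatever was printed; the caller
		// decides whether that is fatal.
		return state, ee.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err
	}
	return state, 0, nil
}

func main() {
	state, code, err := hostStatus("out/minikube-linux-amd64", "old-k8s-version-073001")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("host=%s exit=%d\n", state, code) // e.g. host=Running exit=2
}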
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-073001 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-073001 logs -n 25: (1.051135077s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p NoKubernetes-027639 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                              │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-027639                                                                                                                                                                                                                               │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ stop    │ -p old-k8s-version-073001 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ delete  │ -p running-upgrade-146373                                                                                                                                                                                                                            │ running-upgrade-146373       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-307185 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-073001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-307185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079165 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ old-k8s-version-073001 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:05.682487  296715 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:05.682625  296715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.682640  296715 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:05.682647  296715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.682892  296715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:05.683360  296715 out.go:368] Setting JSON to false
	I1216 03:06:05.684657  296715 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2918,"bootTime":1765851448,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:05.684718  296715 start.go:143] virtualization: kvm guest
	I1216 03:06:05.686760  296715 out.go:179] * [default-k8s-diff-port-079165] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:05.688136  296715 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:05.688131  296715 notify.go:221] Checking for updates...
	I1216 03:06:05.689857  296715 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:05.691168  296715 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:05.692289  296715 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:05.693461  296715 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:05.694667  296715 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:05.696383  296715 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:05.697017  296715 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:05.726936  296715 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:05.727106  296715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:05.790275  296715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:06:05.779307926 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:05.790398  296715 docker.go:319] overlay module found
	I1216 03:06:05.796242  296715 out.go:179] * Using the docker driver based on existing profile
	I1216 03:06:05.797483  296715 start.go:309] selected driver: docker
	I1216 03:06:05.797502  296715 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:05.797606  296715 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:05.798303  296715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:05.854332  296715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-16 03:06:05.844339288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:05.854654  296715 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:05.854685  296715 cni.go:84] Creating CNI manager for ""
	I1216 03:06:05.854753  296715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:05.854799  296715 start.go:353] cluster config:
	{Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:05.856642  296715 out.go:179] * Starting "default-k8s-diff-port-079165" primary control-plane node in "default-k8s-diff-port-079165" cluster
	I1216 03:06:05.857706  296715 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:05.858792  296715 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:05.859728  296715 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:05.859769  296715 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:05.859782  296715 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:05.859813  296715 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:05.859930  296715 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:05.859945  296715 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:05.860041  296715 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/config.json ...
	I1216 03:06:05.880496  296715 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:05.880518  296715 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:05.880533  296715 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:05.880557  296715 start.go:360] acquireMachinesLock for default-k8s-diff-port-079165: {Name:mk0419493342481d6bebce452e91be7e944f2c45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:05.880618  296715 start.go:364] duration metric: took 43.763µs to acquireMachinesLock for "default-k8s-diff-port-079165"
	I1216 03:06:05.880635  296715 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:06:05.880641  296715 fix.go:54] fixHost starting: 
	I1216 03:06:05.880894  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:05.899360  296715 fix.go:112] recreateIfNeeded on default-k8s-diff-port-079165: state=Stopped err=<nil>
	W1216 03:06:05.899392  296715 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 16 03:05:36 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:36.793010292Z" level=info msg="Started container" PID=1737 containerID=ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper id=27cdce48-c9f6-4e44-a41b-f46f0dee2dc5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d51c1cdfe2e232145bb2f8f57fb5ee079e44b2476dc48697e0d05ca1501ac0b4
	Dec 16 03:05:37 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:37.757961473Z" level=info msg="Removing container: dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c" id=3273d1c2-165e-460a-b301-fa5ead23e91e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:37 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:37.766704108Z" level=info msg="Removed container dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=3273d1c2-165e-460a-b301-fa5ead23e91e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.783313837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13ebcb10-bae4-4e2e-b48e-0f46c0282fdb name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.784236662Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=07860cb9-778e-4e5f-977b-ef71041633a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.785373945Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b6e0e3c7-6a03-4df9-bfd0-1ea3f799616d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.78552019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.790798206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.79100658Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b362d658e4ac5f33b1afd8ec26bc58e963f41e907bf32e19539a223cb6f8d159/merged/etc/passwd: no such file or directory"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.791040539Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b362d658e4ac5f33b1afd8ec26bc58e963f41e907bf32e19539a223cb6f8d159/merged/etc/group: no such file or directory"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.791348086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.824775989Z" level=info msg="Created container d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e: kube-system/storage-provisioner/storage-provisioner" id=b6e0e3c7-6a03-4df9-bfd0-1ea3f799616d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.825619907Z" level=info msg="Starting container: d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e" id=d8838813-d84f-4242-af80-f4e609dc370a name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.827842233Z" level=info msg="Started container" PID=1752 containerID=d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e description=kube-system/storage-provisioner/storage-provisioner id=d8838813-d84f-4242-af80-f4e609dc370a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d4316b098556d3fd9148518946cf8f288f6c5a72cb351ad21e375016cbd777f8
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.637350718Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fd62acd-8fdd-45b6-b723-0ef339a9c66f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.638321566Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b126d45-2448-4d8e-beb1-756a16ff9414 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.639454663Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=32352df7-db62-4e70-a000-fe7e738b27e4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.639567024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.646355556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.646867674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.680955845Z" level=info msg="Created container 0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=32352df7-db62-4e70-a000-fe7e738b27e4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.681590421Z" level=info msg="Starting container: 0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5" id=f5120218-a5f3-484d-9b7d-6a80ec0c1125 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.683292763Z" level=info msg="Started container" PID=1771 containerID=0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper id=f5120218-a5f3-484d-9b7d-6a80ec0c1125 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d51c1cdfe2e232145bb2f8f57fb5ee079e44b2476dc48697e0d05ca1501ac0b4
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.799324957Z" level=info msg="Removing container: ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076" id=5ff2d177-1991-4d7f-ab6d-62b94e6373c4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.808789744Z" level=info msg="Removed container ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=5ff2d177-1991-4d7f-ab6d-62b94e6373c4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0e993ddb5f46a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   d51c1cdfe2e23       dashboard-metrics-scraper-5f989dc9cf-zjk95       kubernetes-dashboard
	d8e091e1498cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   d4316b098556d       storage-provisioner                              kube-system
	7f38f2197a346       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   762ff7ac117b4       kubernetes-dashboard-8694d4445c-qgkcx            kubernetes-dashboard
	9cdeb711ba8cb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   e68fe03b02bb3       coredns-5dd5756b68-8lk58                         kube-system
	a3cbf298075ea       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   0bc238a0bffa7       kube-proxy-mhxd9                                 kube-system
	1339bc9f647e3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   fe003c6e4bff2       busybox                                          default
	83b2b7f69ee86       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   7d096eafbf2b7       kindnet-8qgxg                                    kube-system
	05acd28c1daf2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   d4316b098556d       storage-provisioner                              kube-system
	0606b7fb4f398       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   b8c09f604b122       kube-apiserver-old-k8s-version-073001            kube-system
	0a70e4e6115e7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   4dcfbbf8cec31       etcd-old-k8s-version-073001                      kube-system
	4a7db7caad9c2       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   bd2c818d02478       kube-scheduler-old-k8s-version-073001            kube-system
	ad9aca2d6ec11       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   8cfcd916c05be       kube-controller-manager-old-k8s-version-073001   kube-system
	
	
	==> coredns [9cdeb711ba8cb21cb6b70ca53a42ba1f3469ae8a8bbdd5224d97bb4f07493272] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34013 - 26098 "HINFO IN 4904274672574350392.1392974415923756023. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022798529s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-073001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-073001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=old-k8s-version-073001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_04_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:04:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-073001
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-073001
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                4d9d8feb-d0ea-4431-92a9-9a047ec2b103
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-8lk58                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-073001                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-8qgxg                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-073001             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-073001    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-mhxd9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-073001             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zjk95        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qgkcx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node old-k8s-version-073001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node old-k8s-version-073001 event: Registered Node old-k8s-version-073001 in Controller
	  Normal  NodeReady                92s                kubelet          Node old-k8s-version-073001 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node old-k8s-version-073001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-073001 event: Registered Node old-k8s-version-073001 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [0a70e4e6115e7fb5fa291c9f5fc168f6b805b31d2c65e3af685c49ba01a902f0] <==
	{"level":"info","ts":"2025-12-16T03:05:14.210689Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-16T03:05:14.210725Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-16T03:05:14.211797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-16T03:05:14.212579Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-16T03:05:14.212854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:05:14.212932Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:05:14.214685Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-16T03:05:14.21497Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-16T03:05:14.215003Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-16T03:05:14.215098Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-16T03:05:14.215109Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-16T03:05:15.401552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-16T03:05:15.40161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-16T03:05:15.401648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-16T03:05:15.401664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.401672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.401684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.401694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.403128Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-073001 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-16T03:05:15.403152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T03:05:15.403126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T03:05:15.403692Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-16T03:05:15.403768Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-16T03:05:15.4054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-16T03:05:15.405415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:06:08 up 48 min,  0 user,  load average: 3.93, 2.98, 1.95
	Linux old-k8s-version-073001 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83b2b7f69ee8694211740fb2d144ea84d2edac5661e32ebb64e18630319e3734] <==
	I1216 03:05:17.212190       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:05:17.290538       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 03:05:17.291034       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:05:17.291117       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:05:17.291163       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:05:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:05:17.494257       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:05:17.494335       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:05:17.494349       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:05:17.505004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:05:17.990407       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:05:17.990443       1 metrics.go:72] Registering metrics
	I1216 03:05:17.990689       1 controller.go:711] "Syncing nftables rules"
	I1216 03:05:27.494655       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:27.494703       1 main.go:301] handling current node
	I1216 03:05:37.494569       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:37.494617       1 main.go:301] handling current node
	I1216 03:05:47.494298       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:47.494357       1 main.go:301] handling current node
	I1216 03:05:57.494337       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:57.494388       1 main.go:301] handling current node
	I1216 03:06:07.500933       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:06:07.500979       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0606b7fb4f398a35174930e39b2232f673f81cb3addfe09cde5075280b7c7163] <==
	I1216 03:05:16.537116       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1216 03:05:16.591195       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:05:16.638959       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1216 03:05:16.639013       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:05:16.639186       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1216 03:05:16.639276       1 aggregator.go:166] initial CRD sync complete...
	I1216 03:05:16.639318       1 autoregister_controller.go:141] Starting autoregister controller
	I1216 03:05:16.639394       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:05:16.639409       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:05:16.640038       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 03:05:16.639331       1 shared_informer.go:318] Caches are synced for configmaps
	I1216 03:05:16.640661       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1216 03:05:16.640727       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1216 03:05:16.670256       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1216 03:05:17.541020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:05:17.692784       1 controller.go:624] quota admission added evaluator for: namespaces
	I1216 03:05:17.762308       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1216 03:05:17.787291       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:05:17.795676       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:05:17.807420       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1216 03:05:17.852658       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.76.87"}
	I1216 03:05:17.871989       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.37.77"}
	I1216 03:05:29.549777       1 controller.go:624] quota admission added evaluator for: endpoints
	I1216 03:05:29.550961       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1216 03:05:29.575294       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ad9aca2d6ec1198b1eba32ea287c5fb54af4d49b3e1966a31417fcc7f6930a0d] <==
	I1216 03:05:29.602113       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1216 03:05:29.602247       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1216 03:05:29.602257       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1216 03:05:29.602849       1 shared_informer.go:318] Caches are synced for persistent volume
	I1216 03:05:29.603379       1 shared_informer.go:318] Caches are synced for expand
	I1216 03:05:29.605577       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1216 03:05:29.607800       1 shared_informer.go:318] Caches are synced for crt configmap
	I1216 03:05:29.617529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.615996ms"
	I1216 03:05:29.617622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.728µs"
	I1216 03:05:29.687351       1 shared_informer.go:318] Caches are synced for disruption
	I1216 03:05:29.726471       1 shared_informer.go:318] Caches are synced for resource quota
	I1216 03:05:29.798998       1 shared_informer.go:318] Caches are synced for HPA
	I1216 03:05:29.810592       1 shared_informer.go:318] Caches are synced for resource quota
	I1216 03:05:30.138046       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 03:05:30.138084       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1216 03:05:30.138343       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 03:05:33.778325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.599462ms"
	I1216 03:05:33.778440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.388µs"
	I1216 03:05:36.761059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.051µs"
	I1216 03:05:37.768384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.646µs"
	I1216 03:05:38.773988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.46µs"
	I1216 03:05:51.808987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.296942ms"
	I1216 03:05:51.809082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.268µs"
	I1216 03:05:52.809528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.257µs"
	I1216 03:05:59.897794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.14µs"
	
	
	==> kube-proxy [a3cbf298075eacbbdf85122557117214f1ba59e4f6064660dc14a333529bf537] <==
	I1216 03:05:17.093942       1 server_others.go:69] "Using iptables proxy"
	I1216 03:05:17.103289       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1216 03:05:17.121894       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:05:17.124297       1 server_others.go:152] "Using iptables Proxier"
	I1216 03:05:17.124337       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1216 03:05:17.124348       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1216 03:05:17.124384       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1216 03:05:17.124704       1 server.go:846] "Version info" version="v1.28.0"
	I1216 03:05:17.124724       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:05:17.125394       1 config.go:188] "Starting service config controller"
	I1216 03:05:17.125432       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1216 03:05:17.125472       1 config.go:97] "Starting endpoint slice config controller"
	I1216 03:05:17.125916       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1216 03:05:17.125477       1 config.go:315] "Starting node config controller"
	I1216 03:05:17.126435       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1216 03:05:17.228789       1 shared_informer.go:318] Caches are synced for service config
	I1216 03:05:17.228911       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1216 03:05:17.230098       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4a7db7caad9c219a8e3436093739c553d654ae8089c73525ba6d691a792a903d] <==
	I1216 03:05:14.708267       1 serving.go:348] Generated self-signed cert in-memory
	W1216 03:05:16.578126       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:05:16.578279       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:05:16.578318       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:05:16.578373       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:05:16.613400       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1216 03:05:16.613433       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:05:16.616009       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:05:16.616053       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 03:05:16.617176       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1216 03:05:16.617273       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1216 03:05:16.716216       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.585993     732 topology_manager.go:215] "Topology Admit Handler" podUID="0a9a2afa-30fa-49b2-83d7-e08d89a57451" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-qgkcx"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658041     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64sgv\" (UniqueName: \"kubernetes.io/projected/0a9a2afa-30fa-49b2-83d7-e08d89a57451-kube-api-access-64sgv\") pod \"kubernetes-dashboard-8694d4445c-qgkcx\" (UID: \"0a9a2afa-30fa-49b2-83d7-e08d89a57451\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qgkcx"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658117     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0a9a2afa-30fa-49b2-83d7-e08d89a57451-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qgkcx\" (UID: \"0a9a2afa-30fa-49b2-83d7-e08d89a57451\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qgkcx"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658245     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e789c6b4-44ac-4456-aa0d-de06bd341690-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zjk95\" (UID: \"e789c6b4-44ac-4456-aa0d-de06bd341690\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658296     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx7tm\" (UniqueName: \"kubernetes.io/projected/e789c6b4-44ac-4456-aa0d-de06bd341690-kube-api-access-wx7tm\") pod \"dashboard-metrics-scraper-5f989dc9cf-zjk95\" (UID: \"e789c6b4-44ac-4456-aa0d-de06bd341690\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95"
	Dec 16 03:05:36 old-k8s-version-073001 kubelet[732]: I1216 03:05:36.750323     732 scope.go:117] "RemoveContainer" containerID="dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c"
	Dec 16 03:05:36 old-k8s-version-073001 kubelet[732]: I1216 03:05:36.761511     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qgkcx" podStartSLOduration=4.215218471 podCreationTimestamp="2025-12-16 03:05:29 +0000 UTC" firstStartedPulling="2025-12-16 03:05:29.947604445 +0000 UTC m=+16.406967698" lastFinishedPulling="2025-12-16 03:05:33.493836964 +0000 UTC m=+19.953200219" observedRunningTime="2025-12-16 03:05:33.764635931 +0000 UTC m=+20.223999193" watchObservedRunningTime="2025-12-16 03:05:36.761450992 +0000 UTC m=+23.220814252"
	Dec 16 03:05:37 old-k8s-version-073001 kubelet[732]: I1216 03:05:37.756578     732 scope.go:117] "RemoveContainer" containerID="dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c"
	Dec 16 03:05:37 old-k8s-version-073001 kubelet[732]: I1216 03:05:37.756752     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:37 old-k8s-version-073001 kubelet[732]: E1216 03:05:37.757151     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:38 old-k8s-version-073001 kubelet[732]: I1216 03:05:38.761278     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:38 old-k8s-version-073001 kubelet[732]: E1216 03:05:38.761567     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:39 old-k8s-version-073001 kubelet[732]: I1216 03:05:39.888175     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:39 old-k8s-version-073001 kubelet[732]: E1216 03:05:39.888556     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:47 old-k8s-version-073001 kubelet[732]: I1216 03:05:47.782792     732 scope.go:117] "RemoveContainer" containerID="05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: I1216 03:05:52.636670     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: I1216 03:05:52.798114     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: I1216 03:05:52.798361     732 scope.go:117] "RemoveContainer" containerID="0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: E1216 03:05:52.798760     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:59 old-k8s-version-073001 kubelet[732]: I1216 03:05:59.888298     732 scope.go:117] "RemoveContainer" containerID="0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	Dec 16 03:05:59 old-k8s-version-073001 kubelet[732]: E1216 03:05:59.888704     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: kubelet.service: Consumed 1.536s CPU time.
	
	
	==> kubernetes-dashboard [7f38f2197a3465751552a15105fef3e94e2c90032c3cdb5490f61f90e5bc0e69] <==
	2025/12/16 03:05:33 Starting overwatch
	2025/12/16 03:05:33 Using namespace: kubernetes-dashboard
	2025/12/16 03:05:33 Using in-cluster config to connect to apiserver
	2025/12/16 03:05:33 Using secret token for csrf signing
	2025/12/16 03:05:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:05:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:05:33 Successful initial request to the apiserver, version: v1.28.0
	2025/12/16 03:05:33 Generating JWE encryption key
	2025/12/16 03:05:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:05:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:05:33 Initializing JWE encryption key from synchronized object
	2025/12/16 03:05:33 Creating in-cluster Sidecar client
	2025/12/16 03:05:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:05:33 Serving insecurely on HTTP port: 9090
	2025/12/16 03:06:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd] <==
	I1216 03:05:17.069263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:05:47.072299       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e] <==
	I1216 03:05:47.839992       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:05:47.848023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:05:47.848073       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 03:06:05.245587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:06:05.247062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-073001_d5734c43-b2af-490c-a543-0269b718b86e!
	I1216 03:06:05.247449       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55598adf-c5c2-4b9b-a5f6-64fff021d0ce", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-073001_d5734c43-b2af-490c-a543-0269b718b86e became leader
	I1216 03:06:05.347708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-073001_d5734c43-b2af-490c-a543-0269b718b86e!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073001 -n old-k8s-version-073001
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073001 -n old-k8s-version-073001: exit status 2 (322.425652ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-073001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-073001
helpers_test.go:244: (dbg) docker inspect old-k8s-version-073001:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d",
	        "Created": "2025-12-16T03:03:54.698671723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283298,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:05:07.223268529Z",
	            "FinishedAt": "2025-12-16T03:05:06.325639261Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/hosts",
	        "LogPath": "/var/lib/docker/containers/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d/76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d-json.log",
	        "Name": "/old-k8s-version-073001",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-073001:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-073001",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76d012974e40d6b91c6676442e51851e5eaf5ff7f256b2301a62308c3fd52c6d",
	                "LowerDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08b598672925f47d664ab2f93e3b1c649593f265fba8f94e01556bf83643260f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-073001",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-073001/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-073001",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-073001",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-073001",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d0a875f9a9e2c8222eadb9a06e5d6af6054a0095b855f026051ae5f9ba00f5d8",
	            "SandboxKey": "/var/run/docker/netns/d0a875f9a9e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-073001": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5dccd8a47ad3460508b5e229ec860f06a2e52bc9489d8882cbbf26ed9824ada8",
	                    "EndpointID": "5e54896ac97cede9fbfa5df1811b07e87248ae180e781d61df34c4ab6778be7b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "5a:e3:8f:ff:f0:83",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-073001",
	                        "76d012974e40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001: exit status 2 (329.07176ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-073001 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-073001 logs -n 25: (1.11600553s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p NoKubernetes-027639 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                              │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-027639                                                                                                                                                                                                                               │ NoKubernetes-027639          │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:03 UTC │ 16 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ stop    │ -p old-k8s-version-073001 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ delete  │ -p running-upgrade-146373                                                                                                                                                                                                                            │ running-upgrade-146373       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-307185 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-073001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-307185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079165 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ old-k8s-version-073001 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:05.682487  296715 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:05.682625  296715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.682640  296715 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:05.682647  296715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.682892  296715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:05.683360  296715 out.go:368] Setting JSON to false
	I1216 03:06:05.684657  296715 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2918,"bootTime":1765851448,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:05.684718  296715 start.go:143] virtualization: kvm guest
	I1216 03:06:05.686760  296715 out.go:179] * [default-k8s-diff-port-079165] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:05.688136  296715 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:05.688131  296715 notify.go:221] Checking for updates...
	I1216 03:06:05.689857  296715 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:05.691168  296715 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:05.692289  296715 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:05.693461  296715 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:05.694667  296715 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:05.696383  296715 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:05.697017  296715 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:05.726936  296715 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:05.727106  296715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:05.790275  296715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:06:05.779307926 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:05.790398  296715 docker.go:319] overlay module found
	I1216 03:06:05.796242  296715 out.go:179] * Using the docker driver based on existing profile
	I1216 03:06:05.797483  296715 start.go:309] selected driver: docker
	I1216 03:06:05.797502  296715 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:05.797606  296715 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:05.798303  296715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:05.854332  296715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-16 03:06:05.844339288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:05.854654  296715 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:05.854685  296715 cni.go:84] Creating CNI manager for ""
	I1216 03:06:05.854753  296715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:05.854799  296715 start.go:353] cluster config:
	{Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:05.856642  296715 out.go:179] * Starting "default-k8s-diff-port-079165" primary control-plane node in "default-k8s-diff-port-079165" cluster
	I1216 03:06:05.857706  296715 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:05.858792  296715 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:05.859728  296715 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:05.859769  296715 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:05.859782  296715 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:05.859813  296715 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:05.859930  296715 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:05.859945  296715 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:05.860041  296715 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/config.json ...
	I1216 03:06:05.880496  296715 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:05.880518  296715 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:05.880533  296715 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:05.880557  296715 start.go:360] acquireMachinesLock for default-k8s-diff-port-079165: {Name:mk0419493342481d6bebce452e91be7e944f2c45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:05.880618  296715 start.go:364] duration metric: took 43.763µs to acquireMachinesLock for "default-k8s-diff-port-079165"
	I1216 03:06:05.880635  296715 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:06:05.880641  296715 fix.go:54] fixHost starting: 
	I1216 03:06:05.880894  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:05.899360  296715 fix.go:112] recreateIfNeeded on default-k8s-diff-port-079165: state=Stopped err=<nil>
	W1216 03:06:05.899392  296715 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 16 03:05:36 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:36.793010292Z" level=info msg="Started container" PID=1737 containerID=ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper id=27cdce48-c9f6-4e44-a41b-f46f0dee2dc5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d51c1cdfe2e232145bb2f8f57fb5ee079e44b2476dc48697e0d05ca1501ac0b4
	Dec 16 03:05:37 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:37.757961473Z" level=info msg="Removing container: dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c" id=3273d1c2-165e-460a-b301-fa5ead23e91e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:37 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:37.766704108Z" level=info msg="Removed container dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=3273d1c2-165e-460a-b301-fa5ead23e91e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.783313837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13ebcb10-bae4-4e2e-b48e-0f46c0282fdb name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.784236662Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=07860cb9-778e-4e5f-977b-ef71041633a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.785373945Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b6e0e3c7-6a03-4df9-bfd0-1ea3f799616d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.78552019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.790798206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.79100658Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b362d658e4ac5f33b1afd8ec26bc58e963f41e907bf32e19539a223cb6f8d159/merged/etc/passwd: no such file or directory"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.791040539Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b362d658e4ac5f33b1afd8ec26bc58e963f41e907bf32e19539a223cb6f8d159/merged/etc/group: no such file or directory"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.791348086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.824775989Z" level=info msg="Created container d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e: kube-system/storage-provisioner/storage-provisioner" id=b6e0e3c7-6a03-4df9-bfd0-1ea3f799616d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.825619907Z" level=info msg="Starting container: d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e" id=d8838813-d84f-4242-af80-f4e609dc370a name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:05:47 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:47.827842233Z" level=info msg="Started container" PID=1752 containerID=d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e description=kube-system/storage-provisioner/storage-provisioner id=d8838813-d84f-4242-af80-f4e609dc370a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d4316b098556d3fd9148518946cf8f288f6c5a72cb351ad21e375016cbd777f8
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.637350718Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fd62acd-8fdd-45b6-b723-0ef339a9c66f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.638321566Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b126d45-2448-4d8e-beb1-756a16ff9414 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.639454663Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=32352df7-db62-4e70-a000-fe7e738b27e4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.639567024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.646355556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.646867674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.680955845Z" level=info msg="Created container 0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=32352df7-db62-4e70-a000-fe7e738b27e4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.681590421Z" level=info msg="Starting container: 0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5" id=f5120218-a5f3-484d-9b7d-6a80ec0c1125 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.683292763Z" level=info msg="Started container" PID=1771 containerID=0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper id=f5120218-a5f3-484d-9b7d-6a80ec0c1125 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d51c1cdfe2e232145bb2f8f57fb5ee079e44b2476dc48697e0d05ca1501ac0b4
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.799324957Z" level=info msg="Removing container: ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076" id=5ff2d177-1991-4d7f-ab6d-62b94e6373c4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:52 old-k8s-version-073001 crio[566]: time="2025-12-16T03:05:52.808789744Z" level=info msg="Removed container ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95/dashboard-metrics-scraper" id=5ff2d177-1991-4d7f-ab6d-62b94e6373c4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0e993ddb5f46a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   d51c1cdfe2e23       dashboard-metrics-scraper-5f989dc9cf-zjk95       kubernetes-dashboard
	d8e091e1498cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   d4316b098556d       storage-provisioner                              kube-system
	7f38f2197a346       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   762ff7ac117b4       kubernetes-dashboard-8694d4445c-qgkcx            kubernetes-dashboard
	9cdeb711ba8cb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   e68fe03b02bb3       coredns-5dd5756b68-8lk58                         kube-system
	a3cbf298075ea       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   0bc238a0bffa7       kube-proxy-mhxd9                                 kube-system
	1339bc9f647e3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   fe003c6e4bff2       busybox                                          default
	83b2b7f69ee86       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   7d096eafbf2b7       kindnet-8qgxg                                    kube-system
	05acd28c1daf2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   d4316b098556d       storage-provisioner                              kube-system
	0606b7fb4f398       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   b8c09f604b122       kube-apiserver-old-k8s-version-073001            kube-system
	0a70e4e6115e7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   4dcfbbf8cec31       etcd-old-k8s-version-073001                      kube-system
	4a7db7caad9c2       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   bd2c818d02478       kube-scheduler-old-k8s-version-073001            kube-system
	ad9aca2d6ec11       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   8cfcd916c05be       kube-controller-manager-old-k8s-version-073001   kube-system
	
	
	==> coredns [9cdeb711ba8cb21cb6b70ca53a42ba1f3469ae8a8bbdd5224d97bb4f07493272] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34013 - 26098 "HINFO IN 4904274672574350392.1392974415923756023. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022798529s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-073001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-073001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=old-k8s-version-073001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_04_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:04:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-073001
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:05:47 +0000   Tue, 16 Dec 2025 03:04:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-073001
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                4d9d8feb-d0ea-4431-92a9-9a047ec2b103
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-8lk58                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-073001                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-8qgxg                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-073001             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-073001    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-mhxd9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-073001             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zjk95        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qgkcx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node old-k8s-version-073001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-073001 event: Registered Node old-k8s-version-073001 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-073001 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-073001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-073001 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-073001 event: Registered Node old-k8s-version-073001 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [0a70e4e6115e7fb5fa291c9f5fc168f6b805b31d2c65e3af685c49ba01a902f0] <==
	{"level":"info","ts":"2025-12-16T03:05:14.210689Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-16T03:05:14.210725Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-16T03:05:14.211797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-16T03:05:14.212579Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-16T03:05:14.212854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:05:14.212932Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T03:05:14.214685Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-16T03:05:14.21497Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-16T03:05:14.215003Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-16T03:05:14.215098Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-16T03:05:14.215109Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-16T03:05:15.401552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-16T03:05:15.40161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-16T03:05:15.401648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-16T03:05:15.401664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.401672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.401684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.401694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-16T03:05:15.403128Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-073001 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-16T03:05:15.403152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T03:05:15.403126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T03:05:15.403692Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-16T03:05:15.403768Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-16T03:05:15.4054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-16T03:05:15.405415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:06:10 up 48 min,  0 user,  load average: 3.86, 2.98, 1.95
	Linux old-k8s-version-073001 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83b2b7f69ee8694211740fb2d144ea84d2edac5661e32ebb64e18630319e3734] <==
	I1216 03:05:17.212190       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:05:17.290538       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 03:05:17.291034       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:05:17.291117       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:05:17.291163       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:05:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:05:17.494257       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:05:17.494335       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:05:17.494349       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:05:17.505004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:05:17.990407       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:05:17.990443       1 metrics.go:72] Registering metrics
	I1216 03:05:17.990689       1 controller.go:711] "Syncing nftables rules"
	I1216 03:05:27.494655       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:27.494703       1 main.go:301] handling current node
	I1216 03:05:37.494569       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:37.494617       1 main.go:301] handling current node
	I1216 03:05:47.494298       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:47.494357       1 main.go:301] handling current node
	I1216 03:05:57.494337       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:05:57.494388       1 main.go:301] handling current node
	I1216 03:06:07.500933       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:06:07.500979       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0606b7fb4f398a35174930e39b2232f673f81cb3addfe09cde5075280b7c7163] <==
	I1216 03:05:16.537116       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1216 03:05:16.591195       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:05:16.638959       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1216 03:05:16.639013       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:05:16.639186       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1216 03:05:16.639276       1 aggregator.go:166] initial CRD sync complete...
	I1216 03:05:16.639318       1 autoregister_controller.go:141] Starting autoregister controller
	I1216 03:05:16.639394       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:05:16.639409       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:05:16.640038       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 03:05:16.639331       1 shared_informer.go:318] Caches are synced for configmaps
	I1216 03:05:16.640661       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1216 03:05:16.640727       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1216 03:05:16.670256       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1216 03:05:17.541020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:05:17.692784       1 controller.go:624] quota admission added evaluator for: namespaces
	I1216 03:05:17.762308       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1216 03:05:17.787291       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:05:17.795676       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:05:17.807420       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1216 03:05:17.852658       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.76.87"}
	I1216 03:05:17.871989       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.37.77"}
	I1216 03:05:29.549777       1 controller.go:624] quota admission added evaluator for: endpoints
	I1216 03:05:29.550961       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1216 03:05:29.575294       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ad9aca2d6ec1198b1eba32ea287c5fb54af4d49b3e1966a31417fcc7f6930a0d] <==
	I1216 03:05:29.602113       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1216 03:05:29.602247       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1216 03:05:29.602257       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1216 03:05:29.602849       1 shared_informer.go:318] Caches are synced for persistent volume
	I1216 03:05:29.603379       1 shared_informer.go:318] Caches are synced for expand
	I1216 03:05:29.605577       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1216 03:05:29.607800       1 shared_informer.go:318] Caches are synced for crt configmap
	I1216 03:05:29.617529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.615996ms"
	I1216 03:05:29.617622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.728µs"
	I1216 03:05:29.687351       1 shared_informer.go:318] Caches are synced for disruption
	I1216 03:05:29.726471       1 shared_informer.go:318] Caches are synced for resource quota
	I1216 03:05:29.798998       1 shared_informer.go:318] Caches are synced for HPA
	I1216 03:05:29.810592       1 shared_informer.go:318] Caches are synced for resource quota
	I1216 03:05:30.138046       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 03:05:30.138084       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1216 03:05:30.138343       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 03:05:33.778325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.599462ms"
	I1216 03:05:33.778440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.388µs"
	I1216 03:05:36.761059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.051µs"
	I1216 03:05:37.768384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.646µs"
	I1216 03:05:38.773988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.46µs"
	I1216 03:05:51.808987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.296942ms"
	I1216 03:05:51.809082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.268µs"
	I1216 03:05:52.809528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.257µs"
	I1216 03:05:59.897794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.14µs"
	
	
	==> kube-proxy [a3cbf298075eacbbdf85122557117214f1ba59e4f6064660dc14a333529bf537] <==
	I1216 03:05:17.093942       1 server_others.go:69] "Using iptables proxy"
	I1216 03:05:17.103289       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1216 03:05:17.121894       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:05:17.124297       1 server_others.go:152] "Using iptables Proxier"
	I1216 03:05:17.124337       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1216 03:05:17.124348       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1216 03:05:17.124384       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1216 03:05:17.124704       1 server.go:846] "Version info" version="v1.28.0"
	I1216 03:05:17.124724       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:05:17.125394       1 config.go:188] "Starting service config controller"
	I1216 03:05:17.125432       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1216 03:05:17.125472       1 config.go:97] "Starting endpoint slice config controller"
	I1216 03:05:17.125916       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1216 03:05:17.125477       1 config.go:315] "Starting node config controller"
	I1216 03:05:17.126435       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1216 03:05:17.228789       1 shared_informer.go:318] Caches are synced for service config
	I1216 03:05:17.228911       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1216 03:05:17.230098       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4a7db7caad9c219a8e3436093739c553d654ae8089c73525ba6d691a792a903d] <==
	I1216 03:05:14.708267       1 serving.go:348] Generated self-signed cert in-memory
	W1216 03:05:16.578126       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:05:16.578279       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:05:16.578318       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:05:16.578373       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:05:16.613400       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1216 03:05:16.613433       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:05:16.616009       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:05:16.616053       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 03:05:16.617176       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1216 03:05:16.617273       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1216 03:05:16.716216       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.585993     732 topology_manager.go:215] "Topology Admit Handler" podUID="0a9a2afa-30fa-49b2-83d7-e08d89a57451" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-qgkcx"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658041     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64sgv\" (UniqueName: \"kubernetes.io/projected/0a9a2afa-30fa-49b2-83d7-e08d89a57451-kube-api-access-64sgv\") pod \"kubernetes-dashboard-8694d4445c-qgkcx\" (UID: \"0a9a2afa-30fa-49b2-83d7-e08d89a57451\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qgkcx"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658117     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0a9a2afa-30fa-49b2-83d7-e08d89a57451-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qgkcx\" (UID: \"0a9a2afa-30fa-49b2-83d7-e08d89a57451\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qgkcx"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658245     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e789c6b4-44ac-4456-aa0d-de06bd341690-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zjk95\" (UID: \"e789c6b4-44ac-4456-aa0d-de06bd341690\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95"
	Dec 16 03:05:29 old-k8s-version-073001 kubelet[732]: I1216 03:05:29.658296     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx7tm\" (UniqueName: \"kubernetes.io/projected/e789c6b4-44ac-4456-aa0d-de06bd341690-kube-api-access-wx7tm\") pod \"dashboard-metrics-scraper-5f989dc9cf-zjk95\" (UID: \"e789c6b4-44ac-4456-aa0d-de06bd341690\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95"
	Dec 16 03:05:36 old-k8s-version-073001 kubelet[732]: I1216 03:05:36.750323     732 scope.go:117] "RemoveContainer" containerID="dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c"
	Dec 16 03:05:36 old-k8s-version-073001 kubelet[732]: I1216 03:05:36.761511     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qgkcx" podStartSLOduration=4.215218471 podCreationTimestamp="2025-12-16 03:05:29 +0000 UTC" firstStartedPulling="2025-12-16 03:05:29.947604445 +0000 UTC m=+16.406967698" lastFinishedPulling="2025-12-16 03:05:33.493836964 +0000 UTC m=+19.953200219" observedRunningTime="2025-12-16 03:05:33.764635931 +0000 UTC m=+20.223999193" watchObservedRunningTime="2025-12-16 03:05:36.761450992 +0000 UTC m=+23.220814252"
	Dec 16 03:05:37 old-k8s-version-073001 kubelet[732]: I1216 03:05:37.756578     732 scope.go:117] "RemoveContainer" containerID="dcfa18f5c90033ca42f242abda50ef6d558b145190308429dae0cd309c19b47c"
	Dec 16 03:05:37 old-k8s-version-073001 kubelet[732]: I1216 03:05:37.756752     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:37 old-k8s-version-073001 kubelet[732]: E1216 03:05:37.757151     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:38 old-k8s-version-073001 kubelet[732]: I1216 03:05:38.761278     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:38 old-k8s-version-073001 kubelet[732]: E1216 03:05:38.761567     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:39 old-k8s-version-073001 kubelet[732]: I1216 03:05:39.888175     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:39 old-k8s-version-073001 kubelet[732]: E1216 03:05:39.888556     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:47 old-k8s-version-073001 kubelet[732]: I1216 03:05:47.782792     732 scope.go:117] "RemoveContainer" containerID="05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: I1216 03:05:52.636670     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: I1216 03:05:52.798114     732 scope.go:117] "RemoveContainer" containerID="ed4502f718ec116dfce037f8b62aabca325909949122581572a2c26fd5c81076"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: I1216 03:05:52.798361     732 scope.go:117] "RemoveContainer" containerID="0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	Dec 16 03:05:52 old-k8s-version-073001 kubelet[732]: E1216 03:05:52.798760     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:05:59 old-k8s-version-073001 kubelet[732]: I1216 03:05:59.888298     732 scope.go:117] "RemoveContainer" containerID="0e993ddb5f46a1ddd8b5e29b739aa0a572c1288d623023041f58a0042fb504a5"
	Dec 16 03:05:59 old-k8s-version-073001 kubelet[732]: E1216 03:05:59.888704     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zjk95_kubernetes-dashboard(e789c6b4-44ac-4456-aa0d-de06bd341690)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zjk95" podUID="e789c6b4-44ac-4456-aa0d-de06bd341690"
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:06:05 old-k8s-version-073001 systemd[1]: kubelet.service: Consumed 1.536s CPU time.
	
	
	==> kubernetes-dashboard [7f38f2197a3465751552a15105fef3e94e2c90032c3cdb5490f61f90e5bc0e69] <==
	2025/12/16 03:05:33 Starting overwatch
	2025/12/16 03:05:33 Using namespace: kubernetes-dashboard
	2025/12/16 03:05:33 Using in-cluster config to connect to apiserver
	2025/12/16 03:05:33 Using secret token for csrf signing
	2025/12/16 03:05:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:05:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:05:33 Successful initial request to the apiserver, version: v1.28.0
	2025/12/16 03:05:33 Generating JWE encryption key
	2025/12/16 03:05:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:05:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:05:33 Initializing JWE encryption key from synchronized object
	2025/12/16 03:05:33 Creating in-cluster Sidecar client
	2025/12/16 03:05:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:05:33 Serving insecurely on HTTP port: 9090
	2025/12/16 03:06:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [05acd28c1daf24f0886741d71b4148b56c032664de391e38ae0d65edf3de5bbd] <==
	I1216 03:05:17.069263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:05:47.072299       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d8e091e1498cb5a65a8df5ff4a2d9a9ddc168941de0891e779dc4a73ce02088e] <==
	I1216 03:05:47.839992       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:05:47.848023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:05:47.848073       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 03:06:05.245587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:06:05.247062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-073001_d5734c43-b2af-490c-a543-0269b718b86e!
	I1216 03:06:05.247449       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55598adf-c5c2-4b9b-a5f6-64fff021d0ce", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-073001_d5734c43-b2af-490c-a543-0269b718b86e became leader
	I1216 03:06:05.347708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-073001_d5734c43-b2af-490c-a543-0269b718b86e!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073001 -n old-k8s-version-073001
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073001 -n old-k8s-version-073001: exit status 2 (349.185714ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-073001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.19s)
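The kubelet entries in the log dump above show the dashboard-metrics-scraper container cycling through CrashLoopBackOff, with the restart delay growing from "back-off 10s" to "back-off 20s". As a rough, hedged illustration only (this is not kubelet's implementation; the 10s initial delay and 5m cap are commonly documented kubelet defaults and are treated as assumptions here), the doubling-with-cap pattern behind those messages looks like this in Go:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelays sketches kubelet-style CrashLoopBackOff: start at initial,
// double after every failed restart, and never exceed max.
func crashLoopDelays(initial, max time.Duration, restarts int) []time.Duration {
	delays := make([]time.Duration, 0, restarts)
	d := initial
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

func main() {
	// 10s and 5m are assumptions (commonly documented kubelet defaults),
	// not values read from this cluster.
	for i, d := range crashLoopDelays(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("restart %d: back-off %s\n", i+1, d)
	}
	// Prints 10s, 20s, 40s, ... which matches the "back-off 10s" and
	// "back-off 20s" messages in the kubelet section above.
}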

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-307185 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-307185 --alsologtostderr -v=1: exit status 80 (1.950219443s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-307185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:06:11.769351  299917 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:11.769614  299917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:11.769624  299917 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:11.769628  299917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:11.769927  299917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:11.770232  299917 out.go:368] Setting JSON to false
	I1216 03:06:11.770250  299917 mustload.go:66] Loading cluster: no-preload-307185
	I1216 03:06:11.770612  299917 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:11.771080  299917 cli_runner.go:164] Run: docker container inspect no-preload-307185 --format={{.State.Status}}
	I1216 03:06:11.801215  299917 host.go:66] Checking if "no-preload-307185" exists ...
	I1216 03:06:11.801700  299917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:11.878128  299917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:93 SystemTime:2025-12-16 03:06:11.866443652 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:11.879002  299917 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765836331-22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765836331-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-307185 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 03:06:11.881010  299917 out.go:179] * Pausing node no-preload-307185 ... 
	I1216 03:06:11.882330  299917 host.go:66] Checking if "no-preload-307185" exists ...
	I1216 03:06:11.882622  299917 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:11.882660  299917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307185
	I1216 03:06:11.901962  299917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/no-preload-307185/id_rsa Username:docker}
	I1216 03:06:12.000805  299917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:12.014242  299917 pause.go:52] kubelet running: true
	I1216 03:06:12.014333  299917 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:12.189083  299917 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:12.189174  299917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:12.269677  299917 cri.go:89] found id: "2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a"
	I1216 03:06:12.269698  299917 cri.go:89] found id: "94285617cf8b54b104a40a2dfade211e9ac180dc14e8562e579fbf208e59fc2c"
	I1216 03:06:12.269704  299917 cri.go:89] found id: "d09223677018edcd40caab945085de639b66561c056dd22356b55b19d6d259ea"
	I1216 03:06:12.269709  299917 cri.go:89] found id: "4365bbef2f13c5c7aa94d93c553f4ce3ffaae88a7c74bf0962d0bf1c757570d8"
	I1216 03:06:12.269713  299917 cri.go:89] found id: "e47745c0def4d7a44acdc19e8a5f1568bf17ecaf826047bda8f65f148468750d"
	I1216 03:06:12.269718  299917 cri.go:89] found id: "2e6734fb43ba86618db99b3ff8e0ff5567d55903f4314fd69151a5b43036b53f"
	I1216 03:06:12.269722  299917 cri.go:89] found id: "dc346b2097f4206bebdfe44fe4d9335f49968aad9c3530faf56f943dcb6b5412"
	I1216 03:06:12.269725  299917 cri.go:89] found id: "5c9b719650721f0b389bbe33b3c2af2b64eb234a8618322edf4c401d8619f6d5"
	I1216 03:06:12.269729  299917 cri.go:89] found id: "28c40fdfc89c1fb851e93c8fa092d28a97ad8d96c9065b7fef28b6ac068fba7d"
	I1216 03:06:12.269739  299917 cri.go:89] found id: "4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf"
	I1216 03:06:12.269744  299917 cri.go:89] found id: "3065dd89e4fc6a715e9767a3192817736dae4892a600b4fcc552158f7134af8e"
	I1216 03:06:12.269748  299917 cri.go:89] found id: ""
	I1216 03:06:12.269788  299917 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:12.282450  299917 retry.go:31] will retry after 268.928339ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:12Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:12.551943  299917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:12.564690  299917 pause.go:52] kubelet running: false
	I1216 03:06:12.564739  299917 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:12.746280  299917 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:12.746368  299917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:12.846743  299917 cri.go:89] found id: "2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a"
	I1216 03:06:12.846765  299917 cri.go:89] found id: "94285617cf8b54b104a40a2dfade211e9ac180dc14e8562e579fbf208e59fc2c"
	I1216 03:06:12.846771  299917 cri.go:89] found id: "d09223677018edcd40caab945085de639b66561c056dd22356b55b19d6d259ea"
	I1216 03:06:12.846776  299917 cri.go:89] found id: "4365bbef2f13c5c7aa94d93c553f4ce3ffaae88a7c74bf0962d0bf1c757570d8"
	I1216 03:06:12.846782  299917 cri.go:89] found id: "e47745c0def4d7a44acdc19e8a5f1568bf17ecaf826047bda8f65f148468750d"
	I1216 03:06:12.846788  299917 cri.go:89] found id: "2e6734fb43ba86618db99b3ff8e0ff5567d55903f4314fd69151a5b43036b53f"
	I1216 03:06:12.846792  299917 cri.go:89] found id: "dc346b2097f4206bebdfe44fe4d9335f49968aad9c3530faf56f943dcb6b5412"
	I1216 03:06:12.846797  299917 cri.go:89] found id: "5c9b719650721f0b389bbe33b3c2af2b64eb234a8618322edf4c401d8619f6d5"
	I1216 03:06:12.846802  299917 cri.go:89] found id: "28c40fdfc89c1fb851e93c8fa092d28a97ad8d96c9065b7fef28b6ac068fba7d"
	I1216 03:06:12.846827  299917 cri.go:89] found id: "4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf"
	I1216 03:06:12.846833  299917 cri.go:89] found id: "3065dd89e4fc6a715e9767a3192817736dae4892a600b4fcc552158f7134af8e"
	I1216 03:06:12.846837  299917 cri.go:89] found id: ""
	I1216 03:06:12.846887  299917 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:12.861667  299917 retry.go:31] will retry after 401.777141ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:12Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:13.263656  299917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:13.283472  299917 pause.go:52] kubelet running: false
	I1216 03:06:13.283539  299917 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:13.513740  299917 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:13.513850  299917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:13.610861  299917 cri.go:89] found id: "2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a"
	I1216 03:06:13.610894  299917 cri.go:89] found id: "94285617cf8b54b104a40a2dfade211e9ac180dc14e8562e579fbf208e59fc2c"
	I1216 03:06:13.610900  299917 cri.go:89] found id: "d09223677018edcd40caab945085de639b66561c056dd22356b55b19d6d259ea"
	I1216 03:06:13.610905  299917 cri.go:89] found id: "4365bbef2f13c5c7aa94d93c553f4ce3ffaae88a7c74bf0962d0bf1c757570d8"
	I1216 03:06:13.610910  299917 cri.go:89] found id: "e47745c0def4d7a44acdc19e8a5f1568bf17ecaf826047bda8f65f148468750d"
	I1216 03:06:13.610915  299917 cri.go:89] found id: "2e6734fb43ba86618db99b3ff8e0ff5567d55903f4314fd69151a5b43036b53f"
	I1216 03:06:13.610920  299917 cri.go:89] found id: "dc346b2097f4206bebdfe44fe4d9335f49968aad9c3530faf56f943dcb6b5412"
	I1216 03:06:13.610924  299917 cri.go:89] found id: "5c9b719650721f0b389bbe33b3c2af2b64eb234a8618322edf4c401d8619f6d5"
	I1216 03:06:13.610929  299917 cri.go:89] found id: "28c40fdfc89c1fb851e93c8fa092d28a97ad8d96c9065b7fef28b6ac068fba7d"
	I1216 03:06:13.610937  299917 cri.go:89] found id: "4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf"
	I1216 03:06:13.610942  299917 cri.go:89] found id: "3065dd89e4fc6a715e9767a3192817736dae4892a600b4fcc552158f7134af8e"
	I1216 03:06:13.610961  299917 cri.go:89] found id: ""
	I1216 03:06:13.611007  299917 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:13.631530  299917 out.go:203] 
	W1216 03:06:13.634912  299917 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 03:06:13.634941  299917 out.go:285] * 
	* 
	W1216 03:06:13.642101  299917 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:06:13.644401  299917 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-307185 --alsologtostderr -v=1 failed: exit status 80
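The pause failure above bottoms out in "sudo runc list -f json" exiting with status 1 and "open /run/runc: no such file or directory". Purely as a hedged sketch (not minikube's pause code), the same probe can be reproduced over os/exec, separating the missing /run/runc state directory seen here from any other failure; the path and the use of sudo mirror the log output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listRuncContainers runs the same command the failing pause path runs and
// classifies the error observed in this report.
func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			return "", fmt.Errorf("runc state directory /run/runc is missing: %w", err)
		}
		return "", fmt.Errorf("runc list failed: %w\n%s", err, out)
	}
	return string(out), nil
}

func main() {
	out, err := listRuncContainers()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("runc containers:", out)
}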
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307185
helpers_test.go:244: (dbg) docker inspect no-preload-307185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db",
	        "Created": "2025-12-16T03:03:57.812441327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:05:12.224802426Z",
	            "FinishedAt": "2025-12-16T03:05:11.289888047Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/hostname",
	        "HostsPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/hosts",
	        "LogPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db-json.log",
	        "Name": "/no-preload-307185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-307185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db",
	                "LowerDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307185",
	                "Source": "/var/lib/docker/volumes/no-preload-307185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307185",
	                "name.minikube.sigs.k8s.io": "no-preload-307185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "be7196df168589d70d8f71ccab46d6d6e6f9ca92bb9b907f1e3146d6d36b2680",
	            "SandboxKey": "/var/run/docker/netns/be7196df1685",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-307185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90167d09366ac94fe3d8c3c2c088a58bdbd0aa8f97facfeb6de0aac99571708a",
	                    "EndpointID": "069c449a1f92c578abf02bc3995d838a2c58a4864928b4e308afb5151171440a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "9e:40:d4:1f:f7:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307185",
	                        "995416161edc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
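Earlier in the stderr the test resolves the node's SSH endpoint with docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-307185, and the inspect output above shows 22/tcp mapped to host port 33078. A minimal, hedged Go sketch of that lookup (shelling out to the docker CLI rather than using any SDK, and not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the host port Docker mapped to the given container port
// (for example "22/tcp"), using the same Go-template query the log shows.
func hostPortFor(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPortFor("no-preload-307185", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", p) // 33078 for the container inspected above
}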
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185: exit status 2 (449.881553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-307185 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-307185 logs -n 25: (1.553176419s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-073001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ stop    │ -p old-k8s-version-073001 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-307185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │                     │
	│ delete  │ -p running-upgrade-146373                                                                                                                                                                                                                            │ running-upgrade-146373       │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-307185 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-073001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-307185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079165 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ old-k8s-version-073001 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ no-preload-307185 image list --format=json                                                                                                                                                                                                           │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p no-preload-307185 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:05.682487  296715 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:05.682625  296715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.682640  296715 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:05.682647  296715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:05.682892  296715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:05.683360  296715 out.go:368] Setting JSON to false
	I1216 03:06:05.684657  296715 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2918,"bootTime":1765851448,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:05.684718  296715 start.go:143] virtualization: kvm guest
	I1216 03:06:05.686760  296715 out.go:179] * [default-k8s-diff-port-079165] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:05.688136  296715 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:05.688131  296715 notify.go:221] Checking for updates...
	I1216 03:06:05.689857  296715 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:05.691168  296715 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:05.692289  296715 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:05.693461  296715 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:05.694667  296715 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:05.696383  296715 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:05.697017  296715 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:05.726936  296715 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:05.727106  296715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:05.790275  296715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:06:05.779307926 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:05.790398  296715 docker.go:319] overlay module found
	I1216 03:06:05.796242  296715 out.go:179] * Using the docker driver based on existing profile
	I1216 03:06:05.797483  296715 start.go:309] selected driver: docker
	I1216 03:06:05.797502  296715 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:05.797606  296715 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:05.798303  296715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:05.854332  296715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-16 03:06:05.844339288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:05.854654  296715 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:05.854685  296715 cni.go:84] Creating CNI manager for ""
	I1216 03:06:05.854753  296715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:05.854799  296715 start.go:353] cluster config:
	{Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:05.856642  296715 out.go:179] * Starting "default-k8s-diff-port-079165" primary control-plane node in "default-k8s-diff-port-079165" cluster
	I1216 03:06:05.857706  296715 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:05.858792  296715 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:05.859728  296715 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:05.859769  296715 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:05.859782  296715 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:05.859813  296715 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:05.859930  296715 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:05.859945  296715 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:05.860041  296715 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/config.json ...
	I1216 03:06:05.880496  296715 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:05.880518  296715 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:05.880533  296715 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:05.880557  296715 start.go:360] acquireMachinesLock for default-k8s-diff-port-079165: {Name:mk0419493342481d6bebce452e91be7e944f2c45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:05.880618  296715 start.go:364] duration metric: took 43.763µs to acquireMachinesLock for "default-k8s-diff-port-079165"
	I1216 03:06:05.880635  296715 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:06:05.880641  296715 fix.go:54] fixHost starting: 
	I1216 03:06:05.880894  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:05.899360  296715 fix.go:112] recreateIfNeeded on default-k8s-diff-port-079165: state=Stopped err=<nil>
	W1216 03:06:05.899392  296715 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:06:05.901893  296715 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-079165" ...
	I1216 03:06:05.901963  296715 cli_runner.go:164] Run: docker start default-k8s-diff-port-079165
	I1216 03:06:06.181165  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:06.204360  296715 kic.go:430] container "default-k8s-diff-port-079165" state is running.
	I1216 03:06:06.204775  296715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079165
	I1216 03:06:06.228722  296715 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/config.json ...
	I1216 03:06:06.229039  296715 machine.go:94] provisionDockerMachine start ...
	I1216 03:06:06.229114  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:06.253047  296715 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:06.253386  296715 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1216 03:06:06.253408  296715 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:06:06.254218  296715 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58444->127.0.0.1:33088: read: connection reset by peer
	I1216 03:06:09.399569  296715 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079165
	
	I1216 03:06:09.399599  296715 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-079165"
	I1216 03:06:09.399689  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:09.420317  296715 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:09.420553  296715 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1216 03:06:09.420569  296715 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-079165 && echo "default-k8s-diff-port-079165" | sudo tee /etc/hostname
	I1216 03:06:09.569846  296715 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-079165
	
	I1216 03:06:09.569955  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:09.589067  296715 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:09.589348  296715 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1216 03:06:09.589370  296715 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-079165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-079165/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-079165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:06:09.727671  296715 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:06:09.727699  296715 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:06:09.727717  296715 ubuntu.go:190] setting up certificates
	I1216 03:06:09.727725  296715 provision.go:84] configureAuth start
	I1216 03:06:09.727764  296715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079165
	I1216 03:06:09.749046  296715 provision.go:143] copyHostCerts
	I1216 03:06:09.749109  296715 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:06:09.749119  296715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:06:09.749205  296715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:06:09.749320  296715 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:06:09.749332  296715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:06:09.749373  296715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:06:09.749451  296715 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:06:09.749462  296715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:06:09.749497  296715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:06:09.749570  296715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-079165 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-079165 localhost minikube]
	I1216 03:06:09.793013  296715 provision.go:177] copyRemoteCerts
	I1216 03:06:09.793073  296715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:06:09.793104  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:09.811879  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:09.914063  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:06:09.932188  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 03:06:09.950029  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:06:09.967292  296715 provision.go:87] duration metric: took 239.551789ms to configureAuth
	I1216 03:06:09.967318  296715 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:06:09.967480  296715 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:09.967580  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:09.985940  296715 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:09.986178  296715 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1216 03:06:09.986207  296715 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:06:10.327580  296715 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:06:10.327605  296715 machine.go:97] duration metric: took 4.098547386s to provisionDockerMachine
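Editor's note: the step above writes the service CIDR into /etc/sysconfig/crio.minikube as an --insecure-registry option and restarts CRI-O over SSH. As a sketch only (the file path and CIDR are copied from the log; the check commands are illustrative and assume the crio unit sources that file via an EnvironmentFile, as on the kicbase image), the same change can be reproduced and verified by hand:

	# Recreate the option file minikube wrote (values copied from the log above)
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube

	# Restart CRI-O and confirm the unit is active and the file is in place
	sudo systemctl restart crio
	sudo systemctl show crio --property=ActiveState
	cat /etc/sysconfig/crio.minikube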
	I1216 03:06:10.327619  296715 start.go:293] postStartSetup for "default-k8s-diff-port-079165" (driver="docker")
	I1216 03:06:10.327634  296715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:06:10.327693  296715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:06:10.327759  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:10.349547  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:10.452448  296715 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:06:10.456607  296715 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:06:10.456636  296715 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:06:10.456648  296715 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:06:10.456717  296715 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:06:10.456887  296715 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:06:10.457042  296715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:06:10.465772  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:10.486144  296715 start.go:296] duration metric: took 158.507788ms for postStartSetup
	I1216 03:06:10.486232  296715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:06:10.486315  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:10.506361  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:10.602717  296715 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:06:10.607185  296715 fix.go:56] duration metric: took 4.726539044s for fixHost
	I1216 03:06:10.607214  296715 start.go:83] releasing machines lock for "default-k8s-diff-port-079165", held for 4.726584486s
	I1216 03:06:10.607283  296715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-079165
	I1216 03:06:10.626226  296715 ssh_runner.go:195] Run: cat /version.json
	I1216 03:06:10.626309  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:10.626328  296715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:06:10.626392  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:10.647336  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:10.647674  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:10.800566  296715 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:10.807854  296715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:06:10.846514  296715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:06:10.851597  296715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:06:10.851665  296715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:06:10.860404  296715 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 03:06:10.860424  296715 start.go:496] detecting cgroup driver to use...
	I1216 03:06:10.860462  296715 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:06:10.860508  296715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:06:10.876794  296715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:06:10.889829  296715 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:06:10.889927  296715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:06:10.907042  296715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:06:10.920631  296715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:06:11.016567  296715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:06:11.097573  296715 docker.go:234] disabling docker service ...
	I1216 03:06:11.097630  296715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:06:11.112056  296715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:06:11.126725  296715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:06:11.223311  296715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:06:11.314104  296715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:06:11.328350  296715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:06:11.343882  296715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:06:11.343930  296715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:11.353146  296715 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:06:11.353224  296715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:11.362885  296715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:11.372839  296715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:11.383569  296715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:06:11.393097  296715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:11.403743  296715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:11.412885  296715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:11.422962  296715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:06:11.432253  296715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:06:11.441195  296715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:11.540338  296715 ssh_runner.go:195] Run: sudo systemctl restart crio
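Editor's note: the sequence from the crictl.yaml write through this restart points crictl at the CRI-O socket and rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins the pause image, switches the cgroup manager to systemd, and re-adds conmon_cgroup plus the unprivileged-port sysctl. A minimal consolidated sketch of the two key edits, assuming the drop-in file already contains pause_image and cgroup_manager lines (values copied from the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf

	# Pin the pause image and the cgroup manager; sed rewrites the whole line, so the edit is idempotent
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"

	# Apply and verify: crio config prints the effective configuration,
	# and crictl picks its endpoint up from the /etc/crictl.yaml written above
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crio config | grep -E 'pause_image|cgroup_manager'
	sudo /usr/local/bin/crictl version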
	I1216 03:06:11.685753  296715 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:06:11.685869  296715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:06:11.690378  296715 start.go:564] Will wait 60s for crictl version
	I1216 03:06:11.690425  296715 ssh_runner.go:195] Run: which crictl
	I1216 03:06:11.694078  296715 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:06:11.721115  296715 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:06:11.721216  296715 ssh_runner.go:195] Run: crio --version
	I1216 03:06:11.751414  296715 ssh_runner.go:195] Run: crio --version
	I1216 03:06:11.786705  296715 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 03:06:11.788168  296715 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-079165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:11.820644  296715 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 03:06:11.828408  296715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
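Editor's note: the /bin/bash one-liner above is an idempotent hosts-file update. It filters out any existing host.minikube.internal line, appends the gateway address 192.168.85.2's network gateway (192.168.85.1), writes the result to a temp file, and copies it back with sudo so only the copy needs root. A generalized sketch of the same pattern (the helper name and variables are illustrative, not minikube code):

	# Replace-or-add a tab-separated /etc/hosts entry without duplicating it (sketch)
	set_host_entry() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}

	set_host_entry 192.168.85.1 host.minikube.internal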
	I1216 03:06:11.842751  296715 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:11.842942  296715 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:11.843004  296715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:11.881047  296715 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:11.881064  296715 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:11.881115  296715 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:11.908550  296715 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:11.908574  296715 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:11.908583  296715 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1216 03:06:11.908701  296715 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-079165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:06:11.908769  296715 ssh_runner.go:195] Run: crio config
	I1216 03:06:11.957704  296715 cni.go:84] Creating CNI manager for ""
	I1216 03:06:11.957726  296715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:11.957741  296715 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:06:11.957759  296715 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-079165 NodeName:default-k8s-diff-port-079165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:11.957906  296715 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-079165"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
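Editor's note: a few lines below, this rendered configuration is copied to /var/tmp/minikube/kubeadm.yaml.new (the 2224-byte scp) before kubeadm consumes it. As a hedged aside, recent kubeadm releases can sanity-check a config of this shape without touching the cluster; the binary path below mirrors the one used in the log:

	# Validate the rendered kubeadm config offline (sketch; 'kubeadm config validate' requires kubeadm >= 1.26)
	KUBEADM=/var/lib/minikube/binaries/v1.34.2/kubeadm
	sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new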
	
	I1216 03:06:11.957974  296715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:06:11.966429  296715 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:11.966488  296715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:11.974243  296715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1216 03:06:11.986956  296715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:06:12.000421  296715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1216 03:06:12.014061  296715 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:12.017931  296715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:12.027971  296715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:12.114966  296715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:12.138330  296715 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165 for IP: 192.168.85.2
	I1216 03:06:12.138356  296715 certs.go:195] generating shared ca certs ...
	I1216 03:06:12.138372  296715 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:12.138534  296715 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:12.138586  296715 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:12.138599  296715 certs.go:257] generating profile certs ...
	I1216 03:06:12.138709  296715 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/client.key
	I1216 03:06:12.138791  296715 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/apiserver.key.138a830f
	I1216 03:06:12.138875  296715 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/proxy-client.key
	I1216 03:06:12.138994  296715 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:12.139028  296715 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:12.139050  296715 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:12.139088  296715 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:12.139114  296715 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:12.139138  296715 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:12.139197  296715 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:12.139788  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:12.159017  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:12.178672  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:12.199420  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:12.226565  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 03:06:12.247114  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:12.265908  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:12.284953  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/default-k8s-diff-port-079165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:06:12.302452  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:12.319423  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:12.336956  296715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:12.354464  296715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:12.366957  296715 ssh_runner.go:195] Run: openssl version
	I1216 03:06:12.373450  296715 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:12.380655  296715 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:12.387772  296715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:12.391580  296715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:12.391629  296715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:12.426536  296715 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:12.434534  296715 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:12.441957  296715 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:12.449558  296715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:12.453244  296715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:12.453299  296715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:12.487492  296715 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:12.495339  296715 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:12.502867  296715 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:12.510297  296715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:12.513989  296715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:12.514054  296715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:12.550278  296715 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
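Editor's note: the test/ln/openssl/test sequence above (repeated for minikubeCA.pem, 8586.pem and 85862.pem) follows the OpenSSL hashed-directory convention: each CA is symlinked into /etc/ssl/certs, and a second link named <subject-hash>.0 is what OpenSSL actually looks up during verification. A sketch of installing one certificate that way (the file name is illustrative):

	CERT=/usr/share/ca-certificates/minikubeCA.pem

	# Link the CA into the system certificate directory
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem

	# Compute the OpenSSL subject hash and create the <hash>.0 lookup link
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

	# Confirm the hash link resolves
	test -L "/etc/ssl/certs/${HASH}.0" && echo "installed as ${HASH}.0"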
	I1216 03:06:12.558369  296715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:12.562169  296715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:06:12.600837  296715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:06:12.638370  296715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:06:12.684861  296715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:06:12.734305  296715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:06:12.793188  296715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
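Note: each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger certificate regeneration. The same check in pure Go, as a sketch using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h, expires", cert.NotAfter)
}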
	I1216 03:06:12.854444  296715 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-079165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-079165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:12.854553  296715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:12.854616  296715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:12.889344  296715 cri.go:89] found id: "7f87e3c1123f6a7cdb3d996a27b53d6f22b23b6351b58d02cdb00eb78de8c301"
	I1216 03:06:12.889365  296715 cri.go:89] found id: "8c44d80f00165272fd0d7f4fe0f600eca4f5945b7fff563472e76e5a5c4b2055"
	I1216 03:06:12.889371  296715 cri.go:89] found id: "f08cb369199f4afaffd3bcb8c4c8d87f52e397a6343b60c3723942d509b93e09"
	I1216 03:06:12.889379  296715 cri.go:89] found id: "9eb509b8cbb5d7a44028103cf5f6f28096129184fb10f77e1543e3556c3e9c5f"
	I1216 03:06:12.889383  296715 cri.go:89] found id: ""
	I1216 03:06:12.889430  296715 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 03:06:12.902913  296715 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:12Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:12.902977  296715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:12.911524  296715 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 03:06:12.911543  296715 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 03:06:12.911585  296715 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 03:06:12.921185  296715 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:06:12.921989  296715 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-079165" does not appear in /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:12.922535  296715 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-5058/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-079165" cluster setting kubeconfig missing "default-k8s-diff-port-079165" context setting]
	I1216 03:06:12.923437  296715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:12.925564  296715 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 03:06:12.934414  296715 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 03:06:12.934449  296715 kubeadm.go:602] duration metric: took 22.90044ms to restartPrimaryControlPlane
	I1216 03:06:12.934460  296715 kubeadm.go:403] duration metric: took 80.027921ms to StartCluster
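Note: the kubeconfig repair above notices that the jenkins kubeconfig has neither a cluster nor a context entry for default-k8s-diff-port-079165 and rewrites the file under a lock. A sketch of that kind of check-and-patch with client-go's clientcmd package (paths, server URL, and field choices are illustrative, not taken from minikube's code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/.kube/config" // hypothetical kubeconfig path
	name := "default-k8s-diff-port-079165"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the missing cluster and context entries, as the log reports doing.
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = "https://192.168.85.2:8444" // endpoint from the profile above
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
	fmt.Println("kubeconfig updated for", name)
}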
	I1216 03:06:12.934477  296715 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:12.934547  296715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:12.936645  296715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:12.936947  296715 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:12.937190  296715 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:12.937074  296715 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:12.937253  296715 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-079165"
	I1216 03:06:12.937268  296715 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-079165"
	W1216 03:06:12.937279  296715 addons.go:248] addon storage-provisioner should already be in state true
	I1216 03:06:12.937291  296715 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-079165"
	I1216 03:06:12.937327  296715 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-079165"
	W1216 03:06:12.937338  296715 addons.go:248] addon dashboard should already be in state true
	I1216 03:06:12.937347  296715 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-079165"
	I1216 03:06:12.937387  296715 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-079165"
	I1216 03:06:12.937395  296715 host.go:66] Checking if "default-k8s-diff-port-079165" exists ...
	I1216 03:06:12.937305  296715 host.go:66] Checking if "default-k8s-diff-port-079165" exists ...
	I1216 03:06:12.937776  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:12.937955  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:12.937985  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:12.939812  296715 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:12.942115  296715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:12.967062  296715 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:06:12.968291  296715 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-079165"
	W1216 03:06:12.968362  296715 addons.go:248] addon default-storageclass should already be in state true
	I1216 03:06:12.968392  296715 host.go:66] Checking if "default-k8s-diff-port-079165" exists ...
	I1216 03:06:12.968605  296715 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 03:06:12.968719  296715 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:12.968761  296715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:12.968846  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:12.968879  296715 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:06:12.970711  296715 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
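Note: the repeated `docker container inspect ... --format={{.State.Status}}` calls above are how the "Checking if ... exists" host checks are answered — the addon machinery proceeds only if the profile's container reports a suitable state. A minimal sketch of that probe, using the profile name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus shells out exactly like the log lines above:
// docker container inspect <name> --format={{.State.Status}}
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("default-k8s-diff-port-079165")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container status:", status) // e.g. "running" or "paused"
}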
	
	
	==> CRI-O <==
	Dec 16 03:05:40 no-preload-307185 crio[557]: time="2025-12-16T03:05:40.994343717Z" level=info msg="Started container" PID=1735 containerID=79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper id=851403a7-9e3a-444d-a23c-58343182330d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6839bf3817f90a00acf57d2a2707f9ccfdc62183685a59f007fcc194d75c4abd
	Dec 16 03:05:41 no-preload-307185 crio[557]: time="2025-12-16T03:05:41.029420678Z" level=info msg="Removing container: ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e" id=000acc7d-234d-4349-9845-1fc0fc6d1c49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:41 no-preload-307185 crio[557]: time="2025-12-16T03:05:41.040113822Z" level=info msg="Removed container ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=000acc7d-234d-4349-9845-1fc0fc6d1c49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.06104322Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2002203e-1971-4fac-a7cc-c38ab1c853ad name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.062063097Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=12376945-0771-4b99-b6c6-5dbaf64be497 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.063171641Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f11dbf6e-b774-4d85-a474-21c2b2685736 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.063320525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069403131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069562554Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6c52ceeb21d7eea6a3b87b32b4a9a4cbbff221dc23519ca74986e166354c9c97/merged/etc/passwd: no such file or directory"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069587883Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6c52ceeb21d7eea6a3b87b32b4a9a4cbbff221dc23519ca74986e166354c9c97/merged/etc/group: no such file or directory"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069807803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.099055358Z" level=info msg="Created container 2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a: kube-system/storage-provisioner/storage-provisioner" id=f11dbf6e-b774-4d85-a474-21c2b2685736 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.099687407Z" level=info msg="Starting container: 2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a" id=48aa9ef2-9db5-4130-a53b-3410397c7ef7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.101864077Z" level=info msg="Started container" PID=1749 containerID=2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a description=kube-system/storage-provisioner/storage-provisioner id=48aa9ef2-9db5-4130-a53b-3410397c7ef7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab14171acf5e07541184c2e318b8928f61cdaca6b9a3cff649d5cc14fbde78a1
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.94023851Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=23cc4713-c158-4c20-acd3-303595748d5b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.941229568Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=63690135-5af9-4a1c-b0f1-db665867321f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.942246077Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=5d93a1b0-abfd-4907-9627-7e63f8260c31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.942365463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.948067136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.948706539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.980913189Z" level=info msg="Created container 4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=5d93a1b0-abfd-4907-9627-7e63f8260c31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.981444098Z" level=info msg="Starting container: 4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf" id=1dafe344-9787-4b68-8a4a-7b47c78fcd91 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.983284729Z" level=info msg="Started container" PID=1782 containerID=4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper id=1dafe344-9787-4b68-8a4a-7b47c78fcd91 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6839bf3817f90a00acf57d2a2707f9ccfdc62183685a59f007fcc194d75c4abd
	Dec 16 03:06:02 no-preload-307185 crio[557]: time="2025-12-16T03:06:02.089421319Z" level=info msg="Removing container: 79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7" id=f3a9cca5-a6ce-46c6-b74a-e56606befb1d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:06:02 no-preload-307185 crio[557]: time="2025-12-16T03:06:02.099967478Z" level=info msg="Removed container 79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=f3a9cca5-a6ce-46c6-b74a-e56606befb1d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4e18d0819280b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   6839bf3817f90       dashboard-metrics-scraper-867fb5f87b-vmsw5   kubernetes-dashboard
	2c2bb01661821       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   ab14171acf5e0       storage-provisioner                          kube-system
	3065dd89e4fc6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   c25ac824fc93f       kubernetes-dashboard-b84665fb8-ddfzf         kubernetes-dashboard
	94285617cf8b5       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     0                   54a1498437803       coredns-7d764666f9-nm9bc                     kube-system
	9e5484afa2984       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   c9ab44d77b44e       busybox                                      default
	d09223677018e       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           52 seconds ago      Running             kube-proxy                  0                   52f5499697c4b       kube-proxy-tp2h2                             kube-system
	4365bbef2f13c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   5c5d3d8c6f05c       kindnet-7zn78                                kube-system
	e47745c0def4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   ab14171acf5e0       storage-provisioner                          kube-system
	2e6734fb43ba8       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           55 seconds ago      Running             kube-controller-manager     0                   2e218841a22a6       kube-controller-manager-no-preload-307185    kube-system
	dc346b2097f42       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   9d38aebe7d870       etcd-no-preload-307185                       kube-system
	5c9b719650721       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           55 seconds ago      Running             kube-apiserver              0                   ef8a29d47177d       kube-apiserver-no-preload-307185             kube-system
	28c40fdfc89c1       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           55 seconds ago      Running             kube-scheduler              0                   c5ba172a3f533       kube-scheduler-no-preload-307185             kube-system
	
	
	==> coredns [94285617cf8b54b104a40a2dfade211e9ac180dc14e8562e579fbf208e59fc2c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] 127.0.0.1:52941 - 31358 "HINFO IN 5313865864928815661.6593799915876552692. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015339812s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-307185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-307185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=no-preload-307185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_04_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:04:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-307185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:06:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-307185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                a794d9e9-b632-4191-ab05-a56c4459c52f
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-nm9bc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-307185                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-7zn78                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-307185              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-307185     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-tp2h2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-307185              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-vmsw5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-ddfzf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-307185 event: Registered Node no-preload-307185 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-307185 event: Registered Node no-preload-307185 in Controller
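Note: the Allocated resources figures above are the per-pod requests summed against the node's allocatable capacity: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m / 8000m ≈ 10.6%, which kubectl reports as 10%; likewise the 220Mi of memory requests against ~32Gi allocatable rounds down to 0%.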
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [dc346b2097f4206bebdfe44fe4d9335f49968aad9c3530faf56f943dcb6b5412] <==
	{"level":"warn","ts":"2025-12-16T03:05:20.759788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.766060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.772419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.778745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.786342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.792354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.798844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.805939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.817963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.824801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.833135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.841246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.848343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.856228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.862849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.869313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.875915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.882554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.889291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.906493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.912950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.919603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.926192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.976518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42504","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:05:43.856866Z","caller":"traceutil/trace.go:172","msg":"trace[1725682307] transaction","detail":"{read_only:false; response_revision:660; number_of_response:1; }","duration":"174.126644ms","start":"2025-12-16T03:05:43.682722Z","end":"2025-12-16T03:05:43.856849Z","steps":["trace[1725682307] 'process raft request'  (duration: 169.167386ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:06:15 up 48 min,  0 user,  load average: 3.95, 3.01, 1.97
	Linux no-preload-307185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4365bbef2f13c5c7aa94d93c553f4ce3ffaae88a7c74bf0962d0bf1c757570d8] <==
	I1216 03:05:22.495750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:05:22.496038       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1216 03:05:22.496217       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:05:22.496239       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:05:22.496253       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:05:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:05:22.699863       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:05:22.767562       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:05:22.767582       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:05:22.793216       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:05:23.167556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:05:23.167652       1 metrics.go:72] Registering metrics
	I1216 03:05:23.167765       1 controller.go:711] "Syncing nftables rules"
	I1216 03:05:32.697969       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:05:32.698040       1 main.go:301] handling current node
	I1216 03:05:42.697572       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:05:42.697602       1 main.go:301] handling current node
	I1216 03:05:52.697749       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:05:52.697794       1 main.go:301] handling current node
	I1216 03:06:02.697792       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:06:02.697838       1 main.go:301] handling current node
	I1216 03:06:12.704939       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:06:12.704982       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5c9b719650721f0b389bbe33b3c2af2b64eb234a8618322edf4c401d8619f6d5] <==
	I1216 03:05:21.455721       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:05:21.455747       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:05:21.455771       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:05:21.458087       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 03:05:21.459408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 03:05:21.460405       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:05:21.458516       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:21.467379       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:05:21.511518       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:05:21.520754       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:05:21.543578       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 03:05:21.543793       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:05:21.548605       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1216 03:05:21.549457       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 03:05:21.756617       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:05:21.783669       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:05:21.802965       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:05:21.810651       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:05:21.819770       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:05:21.850779       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.132.4"}
	I1216 03:05:21.861107       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.157.203"}
	I1216 03:05:22.348551       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 03:05:25.044697       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:05:25.242052       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:05:25.295640       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2e6734fb43ba86618db99b3ff8e0ff5567d55903f4314fd69151a5b43036b53f] <==
	I1216 03:05:24.598857       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598881       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598949       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.599172       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598193       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598203       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598802       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598952       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598828       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598839       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.599995       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600038       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600080       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600135       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1216 03:05:24.600199       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600251       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-307185"
	I1216 03:05:24.600296       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1216 03:05:24.601060       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.602461       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.607186       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:05:24.695926       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.695961       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 03:05:24.695969       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 03:05:24.707560       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:25.305120       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [d09223677018edcd40caab945085de639b66561c056dd22356b55b19d6d259ea] <==
	I1216 03:05:22.342551       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:05:22.413184       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:05:22.513810       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:22.513868       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1216 03:05:22.513988       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:05:22.536706       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:05:22.536790       1 server_linux.go:136] "Using iptables Proxier"
	I1216 03:05:22.543239       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:05:22.543572       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 03:05:22.543656       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:05:22.545865       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:05:22.545931       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:05:22.545979       1 config.go:200] "Starting service config controller"
	I1216 03:05:22.546426       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:05:22.546387       1 config.go:309] "Starting node config controller"
	I1216 03:05:22.546586       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:05:22.546597       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:05:22.546143       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:05:22.546607       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:05:22.647028       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:05:22.647052       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:05:22.647096       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
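Note: kube-proxy's startup above follows the standard client-go pattern — each config controller logs "Waiting for caches to sync" and then "Caches are synced" once its informer finishes the initial list. A generic sketch of that pattern (the out-of-cluster kubeconfig path is hypothetical; kube-proxy itself builds its client differently):

package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (hypothetical path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One shared informer factory; kube-proxy watches Services, EndpointSlices and Nodes this way.
	factory := informers.NewSharedInformerFactory(client, 0)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	fmt.Println("Waiting for caches to sync")
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("Caches are synced")
}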
	
	
	==> kube-scheduler [28c40fdfc89c1fb851e93c8fa092d28a97ad8d96c9065b7fef28b6ac068fba7d] <==
	I1216 03:05:21.429141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:05:21.429237       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:05:21.429867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:05:21.429992       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 03:05:21.443697       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.445230       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.445798       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.447084       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	E1216 03:05:21.447151       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.460180       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.460555       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.460950       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]"
	E1216 03:05:21.461006       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461150       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461166       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461364       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461381       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461486       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1216 03:05:21.461562       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461599       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461898       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1216 03:05:21.462043       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1216 03:05:21.462136       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1216 03:05:21.462304       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1216 03:05:21.529856       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 03:05:40 no-preload-307185 kubelet[709]: E1216 03:05:40.940235     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:05:40 no-preload-307185 kubelet[709]: I1216 03:05:40.940283     709 scope.go:122] "RemoveContainer" containerID="ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: I1216 03:05:41.028018     709 scope.go:122] "RemoveContainer" containerID="ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: E1216 03:05:41.028260     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: I1216 03:05:41.028290     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: E1216 03:05:41.028479     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:05:43 no-preload-307185 kubelet[709]: E1216 03:05:43.675776     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:05:43 no-preload-307185 kubelet[709]: I1216 03:05:43.675814     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:05:43 no-preload-307185 kubelet[709]: E1216 03:05:43.676032     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:05:53 no-preload-307185 kubelet[709]: I1216 03:05:53.060562     709 scope.go:122] "RemoveContainer" containerID="e47745c0def4d7a44acdc19e8a5f1568bf17ecaf826047bda8f65f148468750d"
	Dec 16 03:05:58 no-preload-307185 kubelet[709]: E1216 03:05:58.052337     709 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nm9bc" containerName="coredns"
	Dec 16 03:06:01 no-preload-307185 kubelet[709]: E1216 03:06:01.939638     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:06:01 no-preload-307185 kubelet[709]: I1216 03:06:01.939681     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: I1216 03:06:02.087989     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: E1216 03:06:02.088219     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: I1216 03:06:02.088267     709 scope.go:122] "RemoveContainer" containerID="4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: E1216 03:06:02.088463     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:06:03 no-preload-307185 kubelet[709]: E1216 03:06:03.675465     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:06:03 no-preload-307185 kubelet[709]: I1216 03:06:03.675505     709 scope.go:122] "RemoveContainer" containerID="4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf"
	Dec 16 03:06:03 no-preload-307185 kubelet[709]: E1216 03:06:03.675696     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:06:12 no-preload-307185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:06:12 no-preload-307185 kubelet[709]: I1216 03:06:12.167315     709 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 03:06:12 no-preload-307185 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:06:12 no-preload-307185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:06:12 no-preload-307185 systemd[1]: kubelet.service: Consumed 1.706s CPU time.
	
	
	==> kubernetes-dashboard [3065dd89e4fc6a715e9767a3192817736dae4892a600b4fcc552158f7134af8e] <==
	2025/12/16 03:05:31 Starting overwatch
	2025/12/16 03:05:31 Using namespace: kubernetes-dashboard
	2025/12/16 03:05:31 Using in-cluster config to connect to apiserver
	2025/12/16 03:05:31 Using secret token for csrf signing
	2025/12/16 03:05:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:05:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:05:31 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/16 03:05:31 Generating JWE encryption key
	2025/12/16 03:05:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:05:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:05:31 Initializing JWE encryption key from synchronized object
	2025/12/16 03:05:31 Creating in-cluster Sidecar client
	2025/12/16 03:05:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:05:31 Serving insecurely on HTTP port: 9090
	2025/12/16 03:06:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a] <==
	I1216 03:05:53.117108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:05:53.127039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:05:53.127101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:05:53.129638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:56.585490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:00.845835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:04.444432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:07.497707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:10.520840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:10.526053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:06:10.526200       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:06:10.526275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81d1dfde-a7b2-428c-90b5-bc639acfdd4f", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-307185_24ea0c19-c102-4eff-abfb-df04b930e775 became leader
	I1216 03:06:10.526385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-307185_24ea0c19-c102-4eff-abfb-df04b930e775!
	W1216 03:06:10.528480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:10.531783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:06:10.627470       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-307185_24ea0c19-c102-4eff-abfb-df04b930e775!
	W1216 03:06:12.535726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:12.539772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:14.555302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:14.575553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e47745c0def4d7a44acdc19e8a5f1568bf17ecaf826047bda8f65f148468750d] <==
	I1216 03:05:22.295629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:05:52.298281       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
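The kube-scheduler errors near the top of the dump ("clusterrole ... not found") are usually transient: kube-apiserver re-creates its bootstrap RBAC ClusterRoles shortly after each restart, and the final "Caches are synced" line suggests the scheduler recovered. A minimal follow-up check, assuming kubectl access through the profile's context (this command is not part of the test's own post-mortem), would be:

	kubectl --context no-preload-307185 get clusterrole system:kube-scheduler system:volume-scheduler system:basic-user

If all three names list successfully once the control plane settles, the bootstrap policy was reconciled and the watch failures should not recur.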
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-307185 -n no-preload-307185
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-307185 -n no-preload-307185: exit status 2 (459.51313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
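The status probes here use minikube's Go-template output (--format) to read one field of the status struct at a time. Several fields can be combined in a single call; a sketch using the two fields already queried in this post-mortem plus Kubelet (assuming that field name is available in this minikube build):

	out/minikube-linux-amd64 status -p no-preload-307185 --format '{{.Host}} {{.APIServer}} {{.Kubelet}}'

After a successful pause the host typically still reports Running while the kubelet and apiserver do not, which is why the harness treats exit status 2 here as "(may be ok)".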
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-307185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307185
helpers_test.go:244: (dbg) docker inspect no-preload-307185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db",
	        "Created": "2025-12-16T03:03:57.812441327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:05:12.224802426Z",
	            "FinishedAt": "2025-12-16T03:05:11.289888047Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/hostname",
	        "HostsPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/hosts",
	        "LogPath": "/var/lib/docker/containers/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db/995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db-json.log",
	        "Name": "/no-preload-307185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-307185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "995416161edcd80a35f4e2fd95a891fa254f629df43e028a30dd0b4c04f5d4db",
	                "LowerDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a70e7c67c94fdb71d71b4950853af76fc2cb03ac7a617cfe6d1af40bfd159329/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307185",
	                "Source": "/var/lib/docker/volumes/no-preload-307185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307185",
	                "name.minikube.sigs.k8s.io": "no-preload-307185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "be7196df168589d70d8f71ccab46d6d6e6f9ca92bb9b907f1e3146d6d36b2680",
	            "SandboxKey": "/var/run/docker/netns/be7196df1685",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-307185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90167d09366ac94fe3d8c3c2c088a58bdbd0aa8f97facfeb6de0aac99571708a",
	                    "EndpointID": "069c449a1f92c578abf02bc3995d838a2c58a4864928b4e308afb5151171440a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "9e:40:d4:1f:f7:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307185",
	                        "995416161edc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
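For the pause failure itself, the most relevant part of the inspect dump is the State block, which shows the kic container running and not paused at the Docker level. A quicker, equivalent query (same docker inspect Go-template mechanism, field names taken from the JSON above) would be:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-307185

Note that minikube pause generally pauses the Kubernetes workloads inside the node container rather than the outer kic container, so "paused=false" here does not by itself explain the failure.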
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185: exit status 2 (358.569867ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-307185 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-307185 logs -n 25: (1.217205639s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:04 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-073001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-307185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079165 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ old-k8s-version-073001 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ image   │ no-preload-307185 image list --format=json                                                                                                                                                                                                           │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p no-preload-307185 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-991316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p disable-driver-mounts-899443                                                                                                                                                                                                                      │ disable-driver-mounts-899443 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:15.181787  301866 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:15.182149  301866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:15.182162  301866 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:15.182169  301866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:15.182519  301866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:15.183183  301866 out.go:368] Setting JSON to false
	I1216 03:06:15.184585  301866 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2927,"bootTime":1765851448,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:15.184646  301866 start.go:143] virtualization: kvm guest
	I1216 03:06:15.186228  301866 out.go:179] * [embed-certs-742794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:15.187929  301866 notify.go:221] Checking for updates...
	I1216 03:06:15.188004  301866 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:15.189467  301866 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:15.191000  301866 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:15.192326  301866 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:15.193621  301866 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:15.195153  301866 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:15.196769  301866 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:15.197002  301866 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:15.197121  301866 config.go:182] Loaded profile config "no-preload-307185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:15.197231  301866 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:15.231524  301866 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:15.231694  301866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:15.324409  301866 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:73 SystemTime:2025-12-16 03:06:15.311052875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:15.324561  301866 docker.go:319] overlay module found
	I1216 03:06:15.326571  301866 out.go:179] * Using the docker driver based on user configuration
	I1216 03:06:15.328044  301866 start.go:309] selected driver: docker
	I1216 03:06:15.328063  301866 start.go:927] validating driver "docker" against <nil>
	I1216 03:06:15.328080  301866 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:15.328963  301866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:15.416069  301866 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-16 03:06:15.403792429 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:15.416268  301866 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:06:15.416554  301866 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:15.418725  301866 out.go:179] * Using Docker driver with root privileges
	I1216 03:06:15.419874  301866 cni.go:84] Creating CNI manager for ""
	I1216 03:06:15.419952  301866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:15.419962  301866 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:06:15.420046  301866 start.go:353] cluster config:
	{Name:embed-certs-742794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:15.421410  301866 out.go:179] * Starting "embed-certs-742794" primary control-plane node in "embed-certs-742794" cluster
	I1216 03:06:15.423555  301866 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:15.424793  301866 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:15.426063  301866 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:15.426108  301866 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:15.426121  301866 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:15.426133  301866 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:15.426275  301866 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:15.426292  301866 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:15.426417  301866 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/config.json ...
	I1216 03:06:15.426446  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/config.json: {Name:mka4b00386af6469ebcfee4c222d4f53e19bab36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:15.449271  301866 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:15.449289  301866 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:15.449306  301866 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:15.449341  301866 start.go:360] acquireMachinesLock for embed-certs-742794: {Name:mkaeec364553f15dc2fc0c32d488bcc5b53ef2eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:15.449456  301866 start.go:364] duration metric: took 93.41µs to acquireMachinesLock for "embed-certs-742794"
	I1216 03:06:15.449495  301866 start.go:93] Provisioning new machine with config: &{Name:embed-certs-742794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:15.449576  301866 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:06:12.972424  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 03:06:12.972445  296715 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 03:06:12.972504  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:12.996791  296715 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:12.996923  296715 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:12.997072  296715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:06:13.003661  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:13.006566  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:13.029965  296715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:06:13.106552  296715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:13.122052  296715 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-079165" to be "Ready" ...
	I1216 03:06:13.124992  296715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:13.127230  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 03:06:13.127257  296715 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 03:06:13.142011  296715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:13.143946  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 03:06:13.143973  296715 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 03:06:13.162373  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 03:06:13.162398  296715 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 03:06:13.179281  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 03:06:13.179303  296715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 03:06:13.197448  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 03:06:13.197473  296715 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 03:06:13.214547  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 03:06:13.214574  296715 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 03:06:13.230139  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 03:06:13.230165  296715 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 03:06:13.258125  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 03:06:13.258156  296715 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 03:06:13.286595  296715 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 03:06:13.286621  296715 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 03:06:13.310371  296715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 03:06:14.738171  296715 node_ready.go:49] node "default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:14.738203  296715 node_ready.go:38] duration metric: took 1.616120403s for node "default-k8s-diff-port-079165" to be "Ready" ...
	I1216 03:06:14.738226  296715 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:14.738280  296715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:15.478929  296715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.353852243s)
	I1216 03:06:15.479042  296715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.336997409s)
	I1216 03:06:15.479291  296715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.168880116s)
	I1216 03:06:15.479335  296715 api_server.go:72] duration metric: took 2.542353276s to wait for apiserver process to appear ...
	I1216 03:06:15.479358  296715 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:15.479383  296715 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1216 03:06:15.481096  296715 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-079165 addons enable metrics-server
	
	I1216 03:06:15.485114  296715 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:15.485198  296715 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:15.487660  296715 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 03:06:15.488734  296715 addons.go:530] duration metric: took 2.551667697s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
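	
	The api_server.go lines above poll https://192.168.85.2:8444/healthz and treat the 500 response ("healthz check failed", with the rbac/bootstrap-roles and scheduling post-start hooks still failing) as not-yet-ready, then retry until the apiserver reports healthy. The following is only a minimal Go sketch of that polling pattern, not minikube's actual implementation; the URL, timeout, interval, and the decision to skip TLS verification are placeholders for illustration.
	
	// healthzpoll: a hedged sketch of the readiness loop visible in the log above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout, interval time.Duration) error {
		// Assumption: the apiserver serves a self-signed cert during bootstrap,
		// so this sketch skips verification; a real client would load the cluster CA.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // every post-start hook reports ok
				}
				// A 500 with "[-]poststarthook/... failed" is expected while RBAC
				// bootstrap roles are still being created; keep polling.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			} else {
				fmt.Printf("healthz request failed: %v\n", err)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		// Placeholder values; the log above happens to poll https://192.168.85.2:8444/healthz.
		if err := waitForHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute, 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}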
	
	
	==> CRI-O <==
	Dec 16 03:05:40 no-preload-307185 crio[557]: time="2025-12-16T03:05:40.994343717Z" level=info msg="Started container" PID=1735 containerID=79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper id=851403a7-9e3a-444d-a23c-58343182330d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6839bf3817f90a00acf57d2a2707f9ccfdc62183685a59f007fcc194d75c4abd
	Dec 16 03:05:41 no-preload-307185 crio[557]: time="2025-12-16T03:05:41.029420678Z" level=info msg="Removing container: ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e" id=000acc7d-234d-4349-9845-1fc0fc6d1c49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:41 no-preload-307185 crio[557]: time="2025-12-16T03:05:41.040113822Z" level=info msg="Removed container ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=000acc7d-234d-4349-9845-1fc0fc6d1c49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.06104322Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2002203e-1971-4fac-a7cc-c38ab1c853ad name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.062063097Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=12376945-0771-4b99-b6c6-5dbaf64be497 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.063171641Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f11dbf6e-b774-4d85-a474-21c2b2685736 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.063320525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069403131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069562554Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6c52ceeb21d7eea6a3b87b32b4a9a4cbbff221dc23519ca74986e166354c9c97/merged/etc/passwd: no such file or directory"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069587883Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6c52ceeb21d7eea6a3b87b32b4a9a4cbbff221dc23519ca74986e166354c9c97/merged/etc/group: no such file or directory"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.069807803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.099055358Z" level=info msg="Created container 2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a: kube-system/storage-provisioner/storage-provisioner" id=f11dbf6e-b774-4d85-a474-21c2b2685736 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.099687407Z" level=info msg="Starting container: 2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a" id=48aa9ef2-9db5-4130-a53b-3410397c7ef7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:05:53 no-preload-307185 crio[557]: time="2025-12-16T03:05:53.101864077Z" level=info msg="Started container" PID=1749 containerID=2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a description=kube-system/storage-provisioner/storage-provisioner id=48aa9ef2-9db5-4130-a53b-3410397c7ef7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab14171acf5e07541184c2e318b8928f61cdaca6b9a3cff649d5cc14fbde78a1
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.94023851Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=23cc4713-c158-4c20-acd3-303595748d5b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.941229568Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=63690135-5af9-4a1c-b0f1-db665867321f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.942246077Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=5d93a1b0-abfd-4907-9627-7e63f8260c31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.942365463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.948067136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.948706539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.980913189Z" level=info msg="Created container 4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=5d93a1b0-abfd-4907-9627-7e63f8260c31 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.981444098Z" level=info msg="Starting container: 4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf" id=1dafe344-9787-4b68-8a4a-7b47c78fcd91 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:01 no-preload-307185 crio[557]: time="2025-12-16T03:06:01.983284729Z" level=info msg="Started container" PID=1782 containerID=4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper id=1dafe344-9787-4b68-8a4a-7b47c78fcd91 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6839bf3817f90a00acf57d2a2707f9ccfdc62183685a59f007fcc194d75c4abd
	Dec 16 03:06:02 no-preload-307185 crio[557]: time="2025-12-16T03:06:02.089421319Z" level=info msg="Removing container: 79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7" id=f3a9cca5-a6ce-46c6-b74a-e56606befb1d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:06:02 no-preload-307185 crio[557]: time="2025-12-16T03:06:02.099967478Z" level=info msg="Removed container 79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5/dashboard-metrics-scraper" id=f3a9cca5-a6ce-46c6-b74a-e56606befb1d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4e18d0819280b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   3                   6839bf3817f90       dashboard-metrics-scraper-867fb5f87b-vmsw5   kubernetes-dashboard
	2c2bb01661821       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   ab14171acf5e0       storage-provisioner                          kube-system
	3065dd89e4fc6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   c25ac824fc93f       kubernetes-dashboard-b84665fb8-ddfzf         kubernetes-dashboard
	94285617cf8b5       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     0                   54a1498437803       coredns-7d764666f9-nm9bc                     kube-system
	9e5484afa2984       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   c9ab44d77b44e       busybox                                      default
	d09223677018e       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           55 seconds ago      Running             kube-proxy                  0                   52f5499697c4b       kube-proxy-tp2h2                             kube-system
	4365bbef2f13c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   5c5d3d8c6f05c       kindnet-7zn78                                kube-system
	e47745c0def4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   ab14171acf5e0       storage-provisioner                          kube-system
	2e6734fb43ba8       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           57 seconds ago      Running             kube-controller-manager     0                   2e218841a22a6       kube-controller-manager-no-preload-307185    kube-system
	dc346b2097f42       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   9d38aebe7d870       etcd-no-preload-307185                       kube-system
	5c9b719650721       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           57 seconds ago      Running             kube-apiserver              0                   ef8a29d47177d       kube-apiserver-no-preload-307185             kube-system
	28c40fdfc89c1       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           57 seconds ago      Running             kube-scheduler              0                   c5ba172a3f533       kube-scheduler-no-preload-307185             kube-system
	
	
	==> coredns [94285617cf8b54b104a40a2dfade211e9ac180dc14e8562e579fbf208e59fc2c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] 127.0.0.1:52941 - 31358 "HINFO IN 5313865864928815661.6593799915876552692. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015339812s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-307185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-307185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=no-preload-307185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_04_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:04:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-307185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:06:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:05:52 +0000   Tue, 16 Dec 2025 03:04:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-307185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                a794d9e9-b632-4191-ab05-a56c4459c52f
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-nm9bc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-307185                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-7zn78                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-307185              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-307185     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-tp2h2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-307185              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-vmsw5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-ddfzf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-307185 event: Registered Node no-preload-307185 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-307185 event: Registered Node no-preload-307185 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [dc346b2097f4206bebdfe44fe4d9335f49968aad9c3530faf56f943dcb6b5412] <==
	{"level":"warn","ts":"2025-12-16T03:05:20.759788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.766060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.772419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.778745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.786342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.792354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.798844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.805939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.817963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.824801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.833135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.841246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.848343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.856228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.862849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.869313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.875915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.882554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.889291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.906493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.912950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.919603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.926192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:05:20.976518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42504","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:05:43.856866Z","caller":"traceutil/trace.go:172","msg":"trace[1725682307] transaction","detail":"{read_only:false; response_revision:660; number_of_response:1; }","duration":"174.126644ms","start":"2025-12-16T03:05:43.682722Z","end":"2025-12-16T03:05:43.856849Z","steps":["trace[1725682307] 'process raft request'  (duration: 169.167386ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:06:17 up 48 min,  0 user,  load average: 3.95, 3.01, 1.97
	Linux no-preload-307185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4365bbef2f13c5c7aa94d93c553f4ce3ffaae88a7c74bf0962d0bf1c757570d8] <==
	I1216 03:05:22.495750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:05:22.496038       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1216 03:05:22.496217       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:05:22.496239       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:05:22.496253       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:05:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:05:22.699863       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:05:22.767562       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:05:22.767582       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:05:22.793216       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:05:23.167556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:05:23.167652       1 metrics.go:72] Registering metrics
	I1216 03:05:23.167765       1 controller.go:711] "Syncing nftables rules"
	I1216 03:05:32.697969       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:05:32.698040       1 main.go:301] handling current node
	I1216 03:05:42.697572       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:05:42.697602       1 main.go:301] handling current node
	I1216 03:05:52.697749       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:05:52.697794       1 main.go:301] handling current node
	I1216 03:06:02.697792       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:06:02.697838       1 main.go:301] handling current node
	I1216 03:06:12.704939       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 03:06:12.704982       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5c9b719650721f0b389bbe33b3c2af2b64eb234a8618322edf4c401d8619f6d5] <==
	I1216 03:05:21.455721       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:05:21.455747       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:05:21.455771       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:05:21.458087       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 03:05:21.459408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 03:05:21.460405       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:05:21.458516       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:21.467379       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:05:21.511518       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:05:21.520754       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:05:21.543578       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 03:05:21.543793       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:05:21.548605       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1216 03:05:21.549457       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 03:05:21.756617       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:05:21.783669       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:05:21.802965       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:05:21.810651       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:05:21.819770       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:05:21.850779       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.132.4"}
	I1216 03:05:21.861107       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.157.203"}
	I1216 03:05:22.348551       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 03:05:25.044697       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:05:25.242052       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:05:25.295640       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2e6734fb43ba86618db99b3ff8e0ff5567d55903f4314fd69151a5b43036b53f] <==
	I1216 03:05:24.598857       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598881       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598949       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.599172       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598193       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598203       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598802       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598952       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598828       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.598839       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.599995       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600038       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600080       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600135       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1216 03:05:24.600199       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.600251       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-307185"
	I1216 03:05:24.600296       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1216 03:05:24.601060       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.602461       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.607186       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:05:24.695926       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:24.695961       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 03:05:24.695969       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 03:05:24.707560       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:25.305120       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [d09223677018edcd40caab945085de639b66561c056dd22356b55b19d6d259ea] <==
	I1216 03:05:22.342551       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:05:22.413184       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:05:22.513810       1 shared_informer.go:377] "Caches are synced"
	I1216 03:05:22.513868       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1216 03:05:22.513988       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:05:22.536706       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:05:22.536790       1 server_linux.go:136] "Using iptables Proxier"
	I1216 03:05:22.543239       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:05:22.543572       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 03:05:22.543656       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:05:22.545865       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:05:22.545931       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:05:22.545979       1 config.go:200] "Starting service config controller"
	I1216 03:05:22.546426       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:05:22.546387       1 config.go:309] "Starting node config controller"
	I1216 03:05:22.546586       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:05:22.546597       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:05:22.546143       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:05:22.546607       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:05:22.647028       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:05:22.647052       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:05:22.647096       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [28c40fdfc89c1fb851e93c8fa092d28a97ad8d96c9065b7fef28b6ac068fba7d] <==
	I1216 03:05:21.429141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:05:21.429237       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:05:21.429867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:05:21.429992       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 03:05:21.443697       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.445230       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.445798       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.447084       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	E1216 03:05:21.447151       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.460180       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.460555       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.460950       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]"
	E1216 03:05:21.461006       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461150       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461166       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461364       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461381       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461486       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1216 03:05:21.461562       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461599       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]"
	E1216 03:05:21.461898       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1216 03:05:21.462043       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1216 03:05:21.462136       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1216 03:05:21.462304       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1216 03:05:21.529856       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 03:05:40 no-preload-307185 kubelet[709]: E1216 03:05:40.940235     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:05:40 no-preload-307185 kubelet[709]: I1216 03:05:40.940283     709 scope.go:122] "RemoveContainer" containerID="ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: I1216 03:05:41.028018     709 scope.go:122] "RemoveContainer" containerID="ea74ea44f30bc55065056e40a225a058748ac82ad00d7ebf71a74f4a0af1ff7e"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: E1216 03:05:41.028260     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: I1216 03:05:41.028290     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:05:41 no-preload-307185 kubelet[709]: E1216 03:05:41.028479     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:05:43 no-preload-307185 kubelet[709]: E1216 03:05:43.675776     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:05:43 no-preload-307185 kubelet[709]: I1216 03:05:43.675814     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:05:43 no-preload-307185 kubelet[709]: E1216 03:05:43.676032     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:05:53 no-preload-307185 kubelet[709]: I1216 03:05:53.060562     709 scope.go:122] "RemoveContainer" containerID="e47745c0def4d7a44acdc19e8a5f1568bf17ecaf826047bda8f65f148468750d"
	Dec 16 03:05:58 no-preload-307185 kubelet[709]: E1216 03:05:58.052337     709 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nm9bc" containerName="coredns"
	Dec 16 03:06:01 no-preload-307185 kubelet[709]: E1216 03:06:01.939638     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:06:01 no-preload-307185 kubelet[709]: I1216 03:06:01.939681     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: I1216 03:06:02.087989     709 scope.go:122] "RemoveContainer" containerID="79efbb819f1c277de16a5cd85bd6863d4190ea0c567f90dbe6aeda96ce58c9b7"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: E1216 03:06:02.088219     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: I1216 03:06:02.088267     709 scope.go:122] "RemoveContainer" containerID="4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf"
	Dec 16 03:06:02 no-preload-307185 kubelet[709]: E1216 03:06:02.088463     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:06:03 no-preload-307185 kubelet[709]: E1216 03:06:03.675465     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" containerName="dashboard-metrics-scraper"
	Dec 16 03:06:03 no-preload-307185 kubelet[709]: I1216 03:06:03.675505     709 scope.go:122] "RemoveContainer" containerID="4e18d0819280bc1d7b2206ba327dbf81b7b966207846869e7aae461869091bcf"
	Dec 16 03:06:03 no-preload-307185 kubelet[709]: E1216 03:06:03.675696     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vmsw5_kubernetes-dashboard(ef429846-a4df-4767-ae3c-4e78905e4568)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vmsw5" podUID="ef429846-a4df-4767-ae3c-4e78905e4568"
	Dec 16 03:06:12 no-preload-307185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:06:12 no-preload-307185 kubelet[709]: I1216 03:06:12.167315     709 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 03:06:12 no-preload-307185 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:06:12 no-preload-307185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:06:12 no-preload-307185 systemd[1]: kubelet.service: Consumed 1.706s CPU time.
	
	
	==> kubernetes-dashboard [3065dd89e4fc6a715e9767a3192817736dae4892a600b4fcc552158f7134af8e] <==
	2025/12/16 03:05:31 Starting overwatch
	2025/12/16 03:05:31 Using namespace: kubernetes-dashboard
	2025/12/16 03:05:31 Using in-cluster config to connect to apiserver
	2025/12/16 03:05:31 Using secret token for csrf signing
	2025/12/16 03:05:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:05:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:05:31 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/16 03:05:31 Generating JWE encryption key
	2025/12/16 03:05:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:05:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:05:31 Initializing JWE encryption key from synchronized object
	2025/12/16 03:05:31 Creating in-cluster Sidecar client
	2025/12/16 03:05:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:05:31 Serving insecurely on HTTP port: 9090
	2025/12/16 03:06:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2c2bb0166182135f84bf01ae44b13f751f0605e7b02e46083e9b930ac0d3ee4a] <==
	I1216 03:05:53.117108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:05:53.127039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:05:53.127101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:05:53.129638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:05:56.585490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:00.845835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:04.444432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:07.497707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:10.520840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:10.526053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:06:10.526200       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:06:10.526275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81d1dfde-a7b2-428c-90b5-bc639acfdd4f", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-307185_24ea0c19-c102-4eff-abfb-df04b930e775 became leader
	I1216 03:06:10.526385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-307185_24ea0c19-c102-4eff-abfb-df04b930e775!
	W1216 03:06:10.528480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:10.531783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:06:10.627470       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-307185_24ea0c19-c102-4eff-abfb-df04b930e775!
	W1216 03:06:12.535726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:12.539772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:14.555302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:14.575553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:16.579716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:16.583887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e47745c0def4d7a44acdc19e8a5f1568bf17ecaf826047bda8f65f148468750d] <==
	I1216 03:05:22.295629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:05:52.298281       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-307185 -n no-preload-307185
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-307185 -n no-preload-307185: exit status 2 (346.166216ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-307185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.56s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-991316 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-991316 --alsologtostderr -v=1: exit status 80 (1.879787049s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-991316 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:06:28.267723  308234 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:28.268144  308234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:28.268158  308234 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:28.268164  308234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:28.268507  308234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:28.268803  308234 out.go:368] Setting JSON to false
	I1216 03:06:28.268877  308234 mustload.go:66] Loading cluster: newest-cni-991316
	I1216 03:06:28.269493  308234 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:28.270153  308234 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:28.313455  308234 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:28.313797  308234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:28.415428  308234 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:100 SystemTime:2025-12-16 03:06:28.401220855 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:28.416493  308234 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765836331-22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765836331-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-991316 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 03:06:28.443960  308234 out.go:179] * Pausing node newest-cni-991316 ... 
	I1216 03:06:28.479430  308234 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:28.479796  308234 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:28.479871  308234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:28.504994  308234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:28.621474  308234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:28.651078  308234 pause.go:52] kubelet running: true
	I1216 03:06:28.651141  308234 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:28.852233  308234 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:28.852331  308234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:28.933402  308234 cri.go:89] found id: "a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303"
	I1216 03:06:28.933425  308234 cri.go:89] found id: "1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c"
	I1216 03:06:28.933432  308234 cri.go:89] found id: "9e8bbaa71c603609c449dee8ce46d5c12489f28238ea9376f424476c5cbd1af3"
	I1216 03:06:28.933437  308234 cri.go:89] found id: "0c28c0cfc004d90699e4e87cdd35e0b26b6c417656ded6b7c595335d959d33dc"
	I1216 03:06:28.933442  308234 cri.go:89] found id: "8b18d0a9af326b9eb1103dc3d046d1ec2ec745aaf68662fc1898a5226313f65f"
	I1216 03:06:28.933450  308234 cri.go:89] found id: "d5dff5ae5810d412e8907ca08c813052e5139b282f71ec0fa1e0c388545594ef"
	I1216 03:06:28.933455  308234 cri.go:89] found id: ""
	I1216 03:06:28.933496  308234 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:28.945722  308234 retry.go:31] will retry after 258.185346ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:28Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:29.205061  308234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:29.219595  308234 pause.go:52] kubelet running: false
	I1216 03:06:29.219659  308234 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:29.357434  308234 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:29.357507  308234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:29.433324  308234 cri.go:89] found id: "a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303"
	I1216 03:06:29.433345  308234 cri.go:89] found id: "1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c"
	I1216 03:06:29.433351  308234 cri.go:89] found id: "9e8bbaa71c603609c449dee8ce46d5c12489f28238ea9376f424476c5cbd1af3"
	I1216 03:06:29.433355  308234 cri.go:89] found id: "0c28c0cfc004d90699e4e87cdd35e0b26b6c417656ded6b7c595335d959d33dc"
	I1216 03:06:29.433360  308234 cri.go:89] found id: "8b18d0a9af326b9eb1103dc3d046d1ec2ec745aaf68662fc1898a5226313f65f"
	I1216 03:06:29.433365  308234 cri.go:89] found id: "d5dff5ae5810d412e8907ca08c813052e5139b282f71ec0fa1e0c388545594ef"
	I1216 03:06:29.433370  308234 cri.go:89] found id: ""
	I1216 03:06:29.433413  308234 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:29.447539  308234 retry.go:31] will retry after 360.312675ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:29Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:29.808062  308234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:29.823365  308234 pause.go:52] kubelet running: false
	I1216 03:06:29.823421  308234 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:06:29.962213  308234 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:06:29.962298  308234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:06:30.035855  308234 cri.go:89] found id: "a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303"
	I1216 03:06:30.035875  308234 cri.go:89] found id: "1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c"
	I1216 03:06:30.035880  308234 cri.go:89] found id: "9e8bbaa71c603609c449dee8ce46d5c12489f28238ea9376f424476c5cbd1af3"
	I1216 03:06:30.035885  308234 cri.go:89] found id: "0c28c0cfc004d90699e4e87cdd35e0b26b6c417656ded6b7c595335d959d33dc"
	I1216 03:06:30.035890  308234 cri.go:89] found id: "8b18d0a9af326b9eb1103dc3d046d1ec2ec745aaf68662fc1898a5226313f65f"
	I1216 03:06:30.035895  308234 cri.go:89] found id: "d5dff5ae5810d412e8907ca08c813052e5139b282f71ec0fa1e0c388545594ef"
	I1216 03:06:30.035899  308234 cri.go:89] found id: ""
	I1216 03:06:30.035937  308234 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:06:30.050164  308234 out.go:203] 
	W1216 03:06:30.051440  308234 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 03:06:30.051463  308234 out.go:285] * 
	* 
	W1216 03:06:30.056124  308234 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:06:30.057384  308234 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-991316 --alsologtostderr -v=1 failed: exit status 80
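Note on the failure above: exit status 80 corresponds to minikube's guest-error class, and the stderr log shows the GUEST_PAUSE cause. The pause path disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers via crictl, then runs `sudo runc list -f json` on the node; that call fails with "open /run/runc: no such file or directory", is retried twice, and the pause is aborted. A minimal manual check along the same lines (hypothetical commands mirroring the stderr above, assuming the newest-cni-991316 profile is still running; these are not part of the test run):

	out/minikube-linux-amd64 ssh -p newest-cni-991316 -- sudo ls /run/runc
	out/minikube-linux-amd64 ssh -p newest-cni-991316 -- sudo runc list -f json

If the first command reports "No such file or directory", the runc state directory is absent inside the node and `minikube pause` will fail exactly as captured in the log.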
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-991316
helpers_test.go:244: (dbg) docker inspect newest-cni-991316:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d",
	        "Created": "2025-12-16T03:05:44.429433316Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301948,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:06:15.174095749Z",
	            "FinishedAt": "2025-12-16T03:06:13.949912546Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/hosts",
	        "LogPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d-json.log",
	        "Name": "/newest-cni-991316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-991316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-991316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d",
	                "LowerDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-991316",
	                "Source": "/var/lib/docker/volumes/newest-cni-991316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-991316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-991316",
	                "name.minikube.sigs.k8s.io": "newest-cni-991316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fe2cb9f3bd270400189b52407d916478b06fc0f50a7b57ad136e1d0c7d2afb30",
	            "SandboxKey": "/var/run/docker/netns/fe2cb9f3bd27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-991316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5f2a89125abbce7b9991af7d91b2faefd2ac42de4f13e650434f1e7fd46fcce",
	                    "EndpointID": "a50ddda68fefc79da98b9964075449fe0cbbdfc36745aa8d9c731ec83a3dc12f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "92:5f:6a:73:72:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-991316",
	                        "4f4fbbe06579"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316: exit status 2 (359.165402ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-991316 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-991316 logs -n 25: (1.062086174s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079165 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ old-k8s-version-073001 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ image   │ no-preload-307185 image list --format=json                                                                                                                                                                                                           │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p no-preload-307185 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-991316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p disable-driver-mounts-899443                                                                                                                                                                                                                      │ disable-driver-mounts-899443 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p auto-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ newest-cni-991316 image list --format=json                                                                                                                                                                                                           │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p newest-cni-991316 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:22.284329  305678 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:22.284617  305678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:22.284631  305678 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:22.284638  305678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:22.284954  305678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:22.285678  305678 out.go:368] Setting JSON to false
	I1216 03:06:22.287282  305678 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2934,"bootTime":1765851448,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:22.287373  305678 start.go:143] virtualization: kvm guest
	I1216 03:06:22.290022  305678 out.go:179] * [auto-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:22.291458  305678 notify.go:221] Checking for updates...
	I1216 03:06:22.292228  305678 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:22.293749  305678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:22.295150  305678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:22.296681  305678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:22.298011  305678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:22.299583  305678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:22.302223  305678 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:22.302393  305678 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:22.302539  305678 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:22.302663  305678 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:22.336116  305678 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:22.336268  305678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:22.411201  305678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-16 03:06:22.398711684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:22.411340  305678 docker.go:319] overlay module found
	I1216 03:06:22.414008  305678 out.go:179] * Using the docker driver based on user configuration
	I1216 03:06:22.415040  305678 start.go:309] selected driver: docker
	I1216 03:06:22.415058  305678 start.go:927] validating driver "docker" against <nil>
	I1216 03:06:22.415073  305678 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:22.415884  305678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:22.492970  305678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-16 03:06:22.480930076 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:22.493168  305678 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:06:22.493459  305678 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:22.495083  305678 out.go:179] * Using Docker driver with root privileges
	I1216 03:06:22.496325  305678 cni.go:84] Creating CNI manager for ""
	I1216 03:06:22.496400  305678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:22.496415  305678 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:06:22.496494  305678 start.go:353] cluster config:
	{Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1216 03:06:22.501228  305678 out.go:179] * Starting "auto-646016" primary control-plane node in "auto-646016" cluster
	I1216 03:06:22.502348  305678 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:22.503563  305678 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:22.506039  305678 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:22.506075  305678 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:22.506084  305678 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:22.506150  305678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:22.506209  305678 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:22.506224  305678 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:22.506376  305678 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/config.json ...
	I1216 03:06:22.506409  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/config.json: {Name:mk6894176fd87eb172eff7a30a02ce744943e5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:22.532894  305678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:22.532917  305678 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:22.532932  305678 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:22.532965  305678 start.go:360] acquireMachinesLock for auto-646016: {Name:mk6f07284451993c7ba7d88753d28ad1c708a70d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:22.533062  305678 start.go:364] duration metric: took 72.426µs to acquireMachinesLock for "auto-646016"
	I1216 03:06:22.533087  305678 start.go:93] Provisioning new machine with config: &{Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:22.533197  305678 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:06:22.128033  301603 kubeadm.go:884] updating cluster {Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:22.128196  301603 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:06:22.128279  301603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:22.172317  301603 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:22.172344  301603 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:22.172399  301603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:22.207442  301603 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:22.207467  301603 cache_images.go:86] Images are preloaded, skipping loading
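[editor's note] Preload verification above is a single "sudo crictl images --output json" call: because every image needed for v1.35.0-beta.0 already appears in the listing, extraction of the preload tarball is skipped. As a rough sketch of that check, the Go program below runs the same command and prints the repo tags it finds; the JSON field names ("images", "repoTags") are an assumption about crictl's output format, not a verified schema, and this is not minikube's own code path.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of "crictl images --output json"; only repoTags are used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println(err)
		return
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags)
	}
}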
	I1216 03:06:22.207477  301603 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 03:06:22.207594  301603 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-991316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
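[editor's note] The kubelet drop-in above uses the usual systemd override pattern: the empty "ExecStart=" clears the packaged command before the minikube-specific invocation (with --hostname-override and --node-ip) is set. Purely as an illustration of how such a drop-in can be rendered, here is a small text/template sketch; the template text, struct, and field names are mine, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Illustrative only: not minikube's real kubelet template.
type kubeletUnit struct {
	BinDir, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above (newest-cni-991316 on 192.168.76.2).
	_ = t.Execute(os.Stdout, kubeletUnit{
		BinDir:   "/var/lib/minikube/binaries/v1.35.0-beta.0",
		NodeName: "newest-cni-991316",
		NodeIP:   "192.168.76.2",
	})
}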
	I1216 03:06:22.207680  301603 ssh_runner.go:195] Run: crio config
	I1216 03:06:22.277901  301603 cni.go:84] Creating CNI manager for ""
	I1216 03:06:22.277928  301603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:22.277945  301603 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 03:06:22.277974  301603 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-991316 NodeName:newest-cni-991316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:22.278189  301603 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-991316"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:22.278269  301603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:06:22.290252  301603 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:22.290338  301603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:22.301484  301603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 03:06:22.322454  301603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 03:06:22.339874  301603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
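[editor's note] The kubeadm config printed above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that has just been written to /var/tmp/minikube/kubeadm.yaml.new (2218 bytes). A quick way to sanity-check such a file is to split it on the "---" separators and confirm every document declares apiVersion and kind; the sketch below does only that minimal check and is not how minikube or kubeadm validates the config.

package main

import (
	"fmt"
	"os"
	"strings"
)

// Minimal sketch: report the apiVersion/kind pair of each document in a
// multi-document kubeadm YAML. Path taken from the log above.
func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "apiVersion:"); ok && apiVersion == "" {
				apiVersion = strings.TrimSpace(v)
			}
			if v, ok := strings.CutPrefix(line, "kind:"); ok && kind == "" {
				kind = strings.TrimSpace(v)
			}
		}
		fmt.Printf("doc %d: %s / %s\n", i, apiVersion, kind)
	}
}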
	I1216 03:06:22.357213  301603 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:22.366199  301603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
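[editor's note] The bash one-liner above strips any stale "control-plane.minikube.internal" entry from /etc/hosts and appends the current control-plane IP. The same transformation is shown below in Go purely as a sketch of what the pipeline does (minikube performs it with the shell command above, not with this code).

package main

import (
	"fmt"
	"strings"
)

// Sketch of the /etc/hosts rewrite: drop any line already ending in
// "<tab>control-plane.minikube.internal", then append the current mapping.
func pinControlPlane(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\tcontrol-plane.minikube.internal\n", ip)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.76.9\tcontrol-plane.minikube.internal"
	fmt.Print(pinControlPlane(hosts, "192.168.76.2")) // IP from the log above
}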
	I1216 03:06:22.381658  301603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:22.507510  301603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:22.531875  301603 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316 for IP: 192.168.76.2
	I1216 03:06:22.531909  301603 certs.go:195] generating shared ca certs ...
	I1216 03:06:22.531933  301603 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:22.532075  301603 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:22.532345  301603 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:22.532407  301603 certs.go:257] generating profile certs ...
	I1216 03:06:22.532582  301603 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.key
	I1216 03:06:22.533119  301603 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key.4c5ce275
	I1216 03:06:22.533264  301603 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key
	I1216 03:06:22.533447  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:22.533495  301603 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:22.533510  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:22.533552  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:22.533589  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:22.533623  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:22.533692  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:22.534586  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:22.560908  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:22.586041  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:22.613702  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:22.647956  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 03:06:22.681696  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:22.706692  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:22.731333  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:06:22.754198  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:22.778251  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:22.800139  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:22.819225  301603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:22.841853  301603 ssh_runner.go:195] Run: openssl version
	I1216 03:06:22.850174  301603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.859571  301603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:22.867979  301603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.872132  301603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.872200  301603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.907564  301603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:22.916207  301603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.924618  301603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:22.932990  301603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.936876  301603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.936925  301603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.977161  301603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:22.985496  301603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:22.994688  301603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:23.005679  301603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:23.009859  301603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:23.009947  301603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:23.056975  301603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
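[editor's note] Each CA bundle copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 names above come from. The sketch below shells out to the same "openssl x509 -hash -noout" invocation to derive such a link name; it illustrates the naming scheme only and is not minikube's code path.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// Derive the /etc/ssl/certs/<subject-hash>.0 symlink name for a PEM cert,
// mirroring the "openssl x509 -hash -noout -in <cert>" calls in the log above.
func hashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0"), nil
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(link) // e.g. /etc/ssl/certs/b5213941.0, as in the log
}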
	I1216 03:06:23.066308  301603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:23.071232  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:06:23.118076  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:06:23.172995  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:06:23.237539  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:06:23.300442  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:06:23.359472  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
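[editor's note] The six "openssl x509 ... -checkend 86400" runs above verify that none of the control-plane certificates expire within the next 24 hours before the cluster is restarted. The equivalent check written directly against crypto/x509, as a standalone sketch rather than minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of "openssl x509 -noout -in <cert> -checkend 86400":
// report whether the certificate expires within the given window.
func expiresSoon(path string, within time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}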
	I1216 03:06:23.413346  301603 kubeadm.go:401] StartCluster: {Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:23.413470  301603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:23.413536  301603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:23.468024  301603 cri.go:89] found id: "9e8bbaa71c603609c449dee8ce46d5c12489f28238ea9376f424476c5cbd1af3"
	I1216 03:06:23.468104  301603 cri.go:89] found id: "0c28c0cfc004d90699e4e87cdd35e0b26b6c417656ded6b7c595335d959d33dc"
	I1216 03:06:23.468123  301603 cri.go:89] found id: "8b18d0a9af326b9eb1103dc3d046d1ec2ec745aaf68662fc1898a5226313f65f"
	I1216 03:06:23.468139  301603 cri.go:89] found id: "d5dff5ae5810d412e8907ca08c813052e5139b282f71ec0fa1e0c388545594ef"
	I1216 03:06:23.468170  301603 cri.go:89] found id: ""
	I1216 03:06:23.468257  301603 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 03:06:23.485622  301603 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:23Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:23.485700  301603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:23.497091  301603 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 03:06:23.497112  301603 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 03:06:23.497187  301603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 03:06:23.506664  301603 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:06:23.507425  301603 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-991316" does not appear in /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:23.507813  301603 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-5058/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-991316" cluster setting kubeconfig missing "newest-cni-991316" context setting]
	I1216 03:06:23.509381  301603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:23.511517  301603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 03:06:23.522389  301603 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 03:06:23.522490  301603 kubeadm.go:602] duration metric: took 25.367584ms to restartPrimaryControlPlane
	I1216 03:06:23.522539  301603 kubeadm.go:403] duration metric: took 109.200619ms to StartCluster
	I1216 03:06:23.522578  301603 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:23.522653  301603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:23.523890  301603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:23.524393  301603 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:23.524550  301603 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:23.524692  301603 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-991316"
	I1216 03:06:23.524727  301603 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:23.524723  301603 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-991316"
	W1216 03:06:23.524879  301603 addons.go:248] addon storage-provisioner should already be in state true
	I1216 03:06:23.524926  301603 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:23.524740  301603 addons.go:70] Setting dashboard=true in profile "newest-cni-991316"
	I1216 03:06:23.524970  301603 addons.go:239] Setting addon dashboard=true in "newest-cni-991316"
	W1216 03:06:23.524980  301603 addons.go:248] addon dashboard should already be in state true
	I1216 03:06:23.525006  301603 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:23.524748  301603 addons.go:70] Setting default-storageclass=true in profile "newest-cni-991316"
	I1216 03:06:23.525079  301603 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-991316"
	I1216 03:06:23.525371  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.525403  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.525447  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.527279  301603 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:23.529457  301603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:23.552265  301603 addons.go:239] Setting addon default-storageclass=true in "newest-cni-991316"
	W1216 03:06:23.552288  301603 addons.go:248] addon default-storageclass should already be in state true
	I1216 03:06:23.552343  301603 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:23.552905  301603 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 03:06:23.552989  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.557315  301603 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:06:23.558628  301603 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:23.558666  301603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:23.558628  301603 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 03:06:23.558747  301603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:23.564984  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 03:06:23.565014  301603 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 03:06:23.565092  301603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:23.586572  301603 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:23.586595  301603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:23.586659  301603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:23.588714  301603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:23.602129  301603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:23.612807  301603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:23.692248  301603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:23.715538  301603 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:23.715645  301603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:23.722473  301603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:23.726299  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 03:06:23.726325  301603 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 03:06:23.734200  301603 api_server.go:72] duration metric: took 209.767894ms to wait for apiserver process to appear ...
	I1216 03:06:23.734226  301603 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:23.734246  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:23.743462  301603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:23.751207  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 03:06:23.751230  301603 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 03:06:23.771921  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 03:06:23.771947  301603 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 03:06:23.797561  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 03:06:23.797671  301603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 03:06:23.821609  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 03:06:23.821676  301603 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 03:06:23.838868  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 03:06:23.838892  301603 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 03:06:23.857286  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 03:06:23.857310  301603 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 03:06:23.877134  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 03:06:23.877158  301603 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 03:06:23.894180  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 03:06:23.894201  301603 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 03:06:23.910211  301603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 03:06:20.369853  301866 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-742794:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.174919027s)
	I1216 03:06:20.369886  301866 kic.go:203] duration metric: took 4.175098231s to extract preloaded images to volume ...
	W1216 03:06:20.369989  301866 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:06:20.370036  301866 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:06:20.370085  301866 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:06:20.435424  301866 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-742794 --name embed-certs-742794 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-742794 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-742794 --network embed-certs-742794 --ip 192.168.103.2 --volume embed-certs-742794:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
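[editor's note] Note the "--publish=127.0.0.1::<port>" forms in the docker run above: each container port (22, 2376, 5000, 8443, 32443) is bound to a random free port on the host loopback, and minikube recovers the assignment later with the "docker container inspect -f ... NetworkSettings.Ports" calls that follow (SSH resolves to host port 33098 in this run). A standalone sketch of that lookup, wrapping the same inspect format string shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Resolve the host port Docker assigned to a container port, using the same
// inspect format string that appears in the log above.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("embed-certs-742794", "22")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // 33098 in this run
}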
	I1216 03:06:20.870758  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Running}}
	I1216 03:06:20.898056  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:20.923576  301866 cli_runner.go:164] Run: docker exec embed-certs-742794 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:06:21.017374  301866 oci.go:144] the created container "embed-certs-742794" has a running status.
	I1216 03:06:21.017472  301866 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa...
	I1216 03:06:21.070271  301866 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:06:21.688706  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:21.714176  301866 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:06:21.714201  301866 kic_runner.go:114] Args: [docker exec --privileged embed-certs-742794 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:06:21.776350  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:21.801321  301866 machine.go:94] provisionDockerMachine start ...
	I1216 03:06:21.801417  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:21.824095  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:21.824618  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:21.824635  301866 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:06:21.976018  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-742794
	
	I1216 03:06:21.976048  301866 ubuntu.go:182] provisioning hostname "embed-certs-742794"
	I1216 03:06:21.976111  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:21.999510  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:21.999761  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:21.999785  301866 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-742794 && echo "embed-certs-742794" | sudo tee /etc/hostname
	I1216 03:06:22.166726  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-742794
	
	I1216 03:06:22.166805  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:22.190474  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:22.190814  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:22.190863  301866 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-742794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-742794/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-742794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:06:22.347543  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:06:22.347569  301866 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:06:22.347592  301866 ubuntu.go:190] setting up certificates
	I1216 03:06:22.347613  301866 provision.go:84] configureAuth start
	I1216 03:06:22.347673  301866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-742794
	I1216 03:06:22.377924  301866 provision.go:143] copyHostCerts
	I1216 03:06:22.378064  301866 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:06:22.378093  301866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:06:22.378182  301866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:06:22.378308  301866 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:06:22.378340  301866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:06:22.378399  301866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:06:22.378496  301866 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:06:22.378518  301866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:06:22.378567  301866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:06:22.378660  301866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.embed-certs-742794 san=[127.0.0.1 192.168.103.2 embed-certs-742794 localhost minikube]
	I1216 03:06:22.449181  301866 provision.go:177] copyRemoteCerts
	I1216 03:06:22.449377  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:06:22.449453  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:22.477150  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:22.593766  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:06:22.624253  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 03:06:22.658288  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:06:22.682985  301866 provision.go:87] duration metric: took 335.351735ms to configureAuth
	I1216 03:06:22.683179  301866 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:06:22.683400  301866 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:22.683536  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:22.708343  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:22.708617  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:22.708644  301866 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:06:23.042592  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:06:23.042617  301866 machine.go:97] duration metric: took 1.241273928s to provisionDockerMachine
	I1216 03:06:23.042629  301866 client.go:176] duration metric: took 7.589920989s to LocalClient.Create
	I1216 03:06:23.042654  301866 start.go:167] duration metric: took 7.589999024s to libmachine.API.Create "embed-certs-742794"
	I1216 03:06:23.042664  301866 start.go:293] postStartSetup for "embed-certs-742794" (driver="docker")
	I1216 03:06:23.042678  301866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:06:23.042747  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:06:23.042793  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.065944  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.177669  301866 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:06:23.183838  301866 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:06:23.183868  301866 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:06:23.183918  301866 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:06:23.183977  301866 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:06:23.184101  301866 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:06:23.184238  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:06:23.194602  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:23.226693  301866 start.go:296] duration metric: took 184.012005ms for postStartSetup
	I1216 03:06:23.227134  301866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-742794
	I1216 03:06:23.258049  301866 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/config.json ...
	I1216 03:06:23.258334  301866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:06:23.258383  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.292911  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.410879  301866 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:06:23.418027  301866 start.go:128] duration metric: took 7.968436213s to createHost
	I1216 03:06:23.418097  301866 start.go:83] releasing machines lock for "embed-certs-742794", held for 7.968626597s
	I1216 03:06:23.418206  301866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-742794
	I1216 03:06:23.445468  301866 ssh_runner.go:195] Run: cat /version.json
	I1216 03:06:23.445589  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.445493  301866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:06:23.445897  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.470699  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.473126  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.673962  301866 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:23.683875  301866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:06:23.743136  301866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:06:23.751637  301866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:06:23.751728  301866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:06:23.797583  301866 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:06:23.797601  301866 start.go:496] detecting cgroup driver to use...
	I1216 03:06:23.797634  301866 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:06:23.797692  301866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:06:23.820384  301866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:06:23.836499  301866 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:06:23.836562  301866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:06:23.859761  301866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:06:23.885604  301866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:06:24.011746  301866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:06:24.140250  301866 docker.go:234] disabling docker service ...
	I1216 03:06:24.140318  301866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:06:24.170922  301866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:06:24.188528  301866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:06:24.311433  301866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:06:24.451002  301866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:06:24.472043  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:06:24.493585  301866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:06:24.493654  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.508679  301866 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:06:24.508751  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.524671  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.538760  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.552880  301866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:06:24.565131  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.579481  301866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.598919  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.611724  301866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:06:24.622312  301866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:06:24.634048  301866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:24.760867  301866 ssh_runner.go:195] Run: sudo systemctl restart crio
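The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, the unprivileged-port sysctl) before reloading systemd and restarting CRI-O. A hedged sketch of the same idea, expressed as a small Go helper that applies key = "value" overrides to a local copy of the drop-in, is shown below; the file path and the regex-based replacement are simplified assumptions rather than minikube's exact sed invocations.

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites (or appends) a `key = "value"` line in a CRI-O
    // drop-in config, roughly what the sed commands in the log accomplish.
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	line := fmt.Sprintf("%s = %q", key, value)
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	if re.Match(data) {
    		data = re.ReplaceAll(data, []byte(line))
    	} else {
    		data = append(data, []byte("\n"+line+"\n")...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	// Hypothetical local copy of the drop-in edited in the log.
    	path := "02-crio.conf"
    	if err := setCrioOption(path, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
    		fmt.Println(err)
    	}
    	if err := setCrioOption(path, "cgroup_manager", "systemd"); err != nil {
    		fmt.Println(err)
    	}
    }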
	W1216 03:06:21.030090  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:23.528010  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:25.563037  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:25.457307  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 03:06:25.457333  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 03:06:25.457347  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:25.497843  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:25.497899  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:25.735013  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:25.740098  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:25.740125  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:26.234845  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:26.240670  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:26.240698  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:26.708028  301603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.964534055s)
	I1216 03:06:26.708749  301603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.986243655s)
	I1216 03:06:26.734888  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:26.740846  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:26.740878  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:26.869794  301603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.9595218s)
	I1216 03:06:26.871403  301603 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-991316 addons enable metrics-server
	
	I1216 03:06:26.873108  301603 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1216 03:06:26.837675  301866 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.076773443s)
	I1216 03:06:26.837899  301866 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:06:26.837983  301866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:06:26.843598  301866 start.go:564] Will wait 60s for crictl version
	I1216 03:06:26.843795  301866 ssh_runner.go:195] Run: which crictl
	I1216 03:06:26.850256  301866 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:06:26.887987  301866 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:06:26.888122  301866 ssh_runner.go:195] Run: crio --version
	I1216 03:06:26.929930  301866 ssh_runner.go:195] Run: crio --version
	I1216 03:06:27.011178  301866 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
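Before reaching this point, the log waits for /var/run/crio/crio.sock to appear and then queries the runtime version through crictl. A small Go sketch of that wait-then-query pattern is below; the poll interval and the use of os/exec are illustrative assumptions, the 60-second limit matches the "Will wait 60s" steps above.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	sock := "/var/run/crio/crio.sock"
    	deadline := time.Now().Add(60 * time.Second)

    	// Poll for the socket path, as the "Will wait 60s for socket path" step does.
    	for {
    		if _, err := os.Stat(sock); err == nil {
    			break
    		}
    		if time.Now().After(deadline) {
    			fmt.Println("timed out waiting for", sock)
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}

    	// Then ask the runtime for its version, as "crictl version" does in the log.
    	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
    	if err != nil {
    		fmt.Println("crictl version failed:", err)
    	}
    	fmt.Printf("%s", out)
    }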
	I1216 03:06:22.536107  305678 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:06:22.536397  305678 start.go:159] libmachine.API.Create for "auto-646016" (driver="docker")
	I1216 03:06:22.536441  305678 client.go:173] LocalClient.Create starting
	I1216 03:06:22.536529  305678 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:06:22.536573  305678 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:22.536594  305678 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:22.536650  305678 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:06:22.536679  305678 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:22.536695  305678 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:22.537250  305678 cli_runner.go:164] Run: docker network inspect auto-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:06:22.561279  305678 cli_runner.go:211] docker network inspect auto-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:06:22.561362  305678 network_create.go:284] running [docker network inspect auto-646016] to gather additional debugging logs...
	I1216 03:06:22.561390  305678 cli_runner.go:164] Run: docker network inspect auto-646016
	W1216 03:06:22.584887  305678 cli_runner.go:211] docker network inspect auto-646016 returned with exit code 1
	I1216 03:06:22.584920  305678 network_create.go:287] error running [docker network inspect auto-646016]: docker network inspect auto-646016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-646016 not found
	I1216 03:06:22.584997  305678 network_create.go:289] output of [docker network inspect auto-646016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-646016 not found
	
	** /stderr **
	I1216 03:06:22.585151  305678 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:22.611540  305678 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:06:22.612584  305678 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:06:22.613754  305678 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:06:22.614672  305678 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e5f2a89125ab IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e3:05:bd:28:c9} reservation:<nil>}
	I1216 03:06:22.615574  305678 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5282d64d27b5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:9a:a8:09:ec:bc:45} reservation:<nil>}
	I1216 03:06:22.617217  305678 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2ba0}
	I1216 03:06:22.617241  305678 network_create.go:124] attempt to create docker network auto-646016 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 03:06:22.617278  305678 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-646016 auto-646016
	I1216 03:06:22.694268  305678 network_create.go:108] docker network auto-646016 192.168.94.0/24 created
	I1216 03:06:22.694305  305678 kic.go:121] calculated static IP "192.168.94.2" for the "auto-646016" container
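The network_create step above appears to walk the private 192.168.x.0/24 ranges in steps of 9 (49, 58, 67, 76, 85, 94) and take the first subnet that no existing bridge occupies, then assign the gateway the .1 address and the node the .2 address. A minimal sketch of that selection logic is shown below; the candidate ordering and the taken set are illustrative and hard-coded rather than read from Docker.

    package main

    import "fmt"

    func main() {
    	// Subnets already claimed by other profiles, per the log.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}

    	// Scan candidate private /24s in the same order the log shows.
    	for third := 49; third <= 255; third += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		if taken[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet)
    		// Gateway gets .1 and the node container gets the static .2 address.
    		fmt.Printf("gateway 192.168.%d.1, static node IP 192.168.%d.2\n", third, third)
    		break
    	}
    }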
	I1216 03:06:22.694374  305678 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:06:22.720025  305678 cli_runner.go:164] Run: docker volume create auto-646016 --label name.minikube.sigs.k8s.io=auto-646016 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:06:22.746421  305678 oci.go:103] Successfully created a docker volume auto-646016
	I1216 03:06:22.746503  305678 cli_runner.go:164] Run: docker run --rm --name auto-646016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-646016 --entrypoint /usr/bin/test -v auto-646016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:06:23.463550  305678 oci.go:107] Successfully prepared a docker volume auto-646016
	I1216 03:06:23.463656  305678 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:23.463668  305678 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:06:23.463743  305678 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 03:06:26.874256  301603 addons.go:530] duration metric: took 3.349721117s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1216 03:06:27.235021  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:27.239550  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 03:06:27.240707  301603 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 03:06:27.240737  301603 api_server.go:131] duration metric: took 3.506503204s to wait for apiserver health ...
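The healthz polling above starts at 403 (the anonymous probe is forbidden), passes through 500 while the apiserver's post-start hooks finish, and ends at 200 with "ok". A hedged Go sketch of such a polling loop is below; skipping TLS verification and the retry interval are assumptions made to keep the example self-contained, not how minikube authenticates to the apiserver.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Insecure client only because this sketch has no access to the cluster CA.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.76.2:8443/healthz"
    	deadline := time.Now().Add(2 * time.Minute)

    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy:", string(body)) // "ok"
    			return
    		}
    		// 403/500 with a hook-by-hook report, as in the log above.
    		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }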
	I1216 03:06:27.240748  301603 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:27.244845  301603 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:27.244893  301603 system_pods.go:61] "coredns-7d764666f9-86ggg" [7d507301-7465-4008-a336-b3ccdf6ac711] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 03:06:27.244918  301603 system_pods.go:61] "etcd-newest-cni-991316" [628355b8-6876-4153-97e8-294f83717eaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:06:27.244928  301603 system_pods.go:61] "kindnet-7vnx2" [693caa56-221c-4967-b459-24c95a6f228b] Running
	I1216 03:06:27.244940  301603 system_pods.go:61] "kube-apiserver-newest-cni-991316" [80fa29df-b694-4669-a80b-e62f176662a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:06:27.244955  301603 system_pods.go:61] "kube-controller-manager-newest-cni-991316" [6cff15c4-01ea-444f-8e42-d10e73a10abf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:06:27.244965  301603 system_pods.go:61] "kube-proxy-k55dg" [3dcf431e-16a0-4327-b437-ad2b0b7cbea0] Running
	I1216 03:06:27.244973  301603 system_pods.go:61] "kube-scheduler-newest-cni-991316" [17447c80-9e25-41d6-844f-3714404a2404] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:06:27.244984  301603 system_pods.go:61] "storage-provisioner" [b2aa6962-6de7-4fb0-914b-43e726858087] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 03:06:27.245010  301603 system_pods.go:74] duration metric: took 4.254347ms to wait for pod list to return data ...
	I1216 03:06:27.245031  301603 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:27.248030  301603 default_sa.go:45] found service account: "default"
	I1216 03:06:27.248053  301603 default_sa.go:55] duration metric: took 3.014741ms for default service account to be created ...
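The system_pods and default_sa checks above list the kube-system pods and look up the "default" service account before declaring the cluster usable. A rough client-go equivalent is sketched below; the kubeconfig path is an assumption and error handling is abbreviated, since the test harness drives this through its own profile configuration.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Equivalent of "waiting for kube-system pods to appear".
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    	}

    	// Equivalent of "waiting for default service account to be created".
    	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("found service account:", sa.Name)
    }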
	I1216 03:06:27.248067  301603 kubeadm.go:587] duration metric: took 3.723638897s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 03:06:27.248094  301603 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:27.251336  301603 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:27.251370  301603 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:27.251382  301603 node_conditions.go:105] duration metric: took 3.283869ms to run NodePressure ...
	I1216 03:06:27.251393  301603 start.go:242] waiting for startup goroutines ...
	I1216 03:06:27.251399  301603 start.go:247] waiting for cluster config update ...
	I1216 03:06:27.251409  301603 start.go:256] writing updated cluster config ...
	I1216 03:06:27.288804  301603 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:27.357781  301603 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 03:06:27.384151  301603 out.go:179] * Done! kubectl is now configured to use "newest-cni-991316" cluster and "default" namespace by default
	I1216 03:06:27.092261  301866 cli_runner.go:164] Run: docker network inspect embed-certs-742794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:27.117273  301866 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 03:06:27.122514  301866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:27.150872  301866 kubeadm.go:884] updating cluster {Name:embed-certs-742794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:27.151033  301866 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:27.151094  301866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:27.196339  301866 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:27.196367  301866 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:27.196421  301866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:27.224753  301866 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:27.224771  301866 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:27.224778  301866 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1216 03:06:27.224907  301866 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-742794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:06:27.224976  301866 ssh_runner.go:195] Run: crio config
	I1216 03:06:27.274734  301866 cni.go:84] Creating CNI manager for ""
	I1216 03:06:27.274760  301866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:27.274778  301866 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:06:27.274799  301866 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-742794 NodeName:embed-certs-742794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:27.275000  301866 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-742794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:27.275073  301866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:06:27.284577  301866 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:27.284651  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:27.298386  301866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1216 03:06:27.317463  301866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:06:27.387845  301866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1216 03:06:27.406173  301866 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:27.411438  301866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:27.429319  301866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:27.559467  301866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:27.590359  301866 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794 for IP: 192.168.103.2
	I1216 03:06:27.590406  301866 certs.go:195] generating shared ca certs ...
	I1216 03:06:27.590426  301866 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.590666  301866 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:27.590717  301866 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:27.590728  301866 certs.go:257] generating profile certs ...
	I1216 03:06:27.590810  301866 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.key
	I1216 03:06:27.590849  301866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.crt with IP's: []
	I1216 03:06:27.642299  301866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.crt ...
	I1216 03:06:27.642327  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.crt: {Name:mka8440026461283e7781be649a377ed69c0c334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.642489  301866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.key ...
	I1216 03:06:27.642503  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.key: {Name:mk2aed8bec3654e799d7107ebcef6ca8e4309070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.642578  301866 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28
	I1216 03:06:27.642594  301866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 03:06:27.707990  301866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28 ...
	I1216 03:06:27.708025  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28: {Name:mkbf3ef6bfbaa7614efd1e6a67cc5c7d4253e15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.708224  301866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28 ...
	I1216 03:06:27.708243  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28: {Name:mkbe07cb01c2acb53ae9c637dfbf6702d63a9e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.708359  301866 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt
	I1216 03:06:27.708453  301866 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key
	I1216 03:06:27.708533  301866 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key
	I1216 03:06:27.708551  301866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt with IP's: []
	I1216 03:06:27.846831  301866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt ...
	I1216 03:06:27.846860  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt: {Name:mk41dbdda2eae1a3102527058f8d046993235905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.847043  301866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key ...
	I1216 03:06:27.847062  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key: {Name:mk339b80387c07e7a8ad4a4459a7c29b4085a338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
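The certs.go steps above generate a client cert, an apiserver serving cert with the listed SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2), and an aggregator proxy-client cert, all signed by the shared minikube CA. A condensed Go sketch of CA-signed certificate generation is shown below; it creates a throwaway CA in memory rather than reusing minikube's existing ca.crt/ca.key, and the key size and validity periods are assumptions.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA; minikube would reuse the existing minikubeCA key pair instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Apiserver serving cert with the SANs listed in the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

    	// Emit the signed cert as PEM, the same format as apiserver.crt in the log.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }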
	I1216 03:06:27.847290  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:27.847365  301866 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:27.847380  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:27.847418  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:27.847453  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:27.847486  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:27.847551  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:27.848366  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:27.872879  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:27.896282  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:27.922715  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:27.947311  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 03:06:27.973465  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:28.000775  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:28.026223  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:06:28.049838  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:28.079320  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:28.109005  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:28.134354  301866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:28.151514  301866 ssh_runner.go:195] Run: openssl version
	I1216 03:06:28.159443  301866 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.170164  301866 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:28.180498  301866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.187987  301866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.188077  301866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.251319  301866 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:28.265738  301866 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:28.278594  301866 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.296085  301866 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:28.310156  301866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.316670  301866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.316750  301866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.407757  301866 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:28.418774  301866 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:28.438485  301866 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.449298  301866 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:28.459761  301866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.465454  301866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.465521  301866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.517487  301866 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:28.528230  301866 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
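Each extra CA is first staged in /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is what lets tools using the system CApath find it. A minimal sketch of that pattern, using a hypothetical cert path instead of the profile-specific files:

	CERT=/usr/share/ca-certificates/example.pem        # hypothetical path
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # CApath-style symlink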
	I1216 03:06:28.536473  301866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:28.541410  301866 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:28.541465  301866 kubeadm.go:401] StartCluster: {Name:embed-certs-742794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:28.541628  301866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:28.541695  301866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:28.585896  301866 cri.go:89] found id: ""
	I1216 03:06:28.585974  301866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:28.597446  301866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:28.608323  301866 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:28.608388  301866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:28.623512  301866 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:28.623545  301866 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:28.623591  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:28.641019  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:28.641078  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:28.653179  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:28.664616  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:28.664768  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:28.674755  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:28.685378  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:28.685451  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:28.694813  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:28.709080  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:28.709198  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
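The pattern above is minikube's stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it. A hedged one-file sketch of the same check:

	FILE=/etc/kubernetes/admin.conf
	if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$FILE" 2>/dev/null; then
	    sudo rm -f "$FILE"    # missing or stale: let kubeadm init recreate it
	fi

Here all four files are absent (first start of this profile), so every grep exits 2 and the rm calls are no-ops.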
	I1216 03:06:28.722080  301866 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:28.780698  301866 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:06:28.780767  301866 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:06:28.809967  301866 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:06:28.810073  301866 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:06:28.810296  301866 kubeadm.go:319] OS: Linux
	I1216 03:06:28.810444  301866 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:06:28.810557  301866 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:06:28.810631  301866 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:06:28.810698  301866 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:06:28.810798  301866 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:06:28.810906  301866 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:06:28.811006  301866 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:06:28.811082  301866 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:06:28.894637  301866 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:06:28.894752  301866 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:06:28.894914  301866 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:06:28.903719  301866 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:06:28.908115  301866 out.go:252]   - Generating certificates and keys ...
	I1216 03:06:28.908322  301866 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:06:28.908433  301866 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:06:29.178936  301866 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:06:29.427885  301866 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:06:29.506466  301866 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:06:29.667271  301866 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:06:30.003721  301866 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:06:30.004014  301866 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-742794 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	
	
	==> CRI-O <==
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.544801478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.552679214Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dbaef898-6e40-49a4-bd2c-b6f6faf31ce1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.553165594Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5a90c108-90dd-4846-a04b-020611a634bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.557244308Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.55856057Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.56152136Z" level=info msg="Ran pod sandbox 5b5a814b936de6ec27efefe2519d62550fcc818145772b76e94ec8f6834bb770 with infra container: kube-system/kube-proxy-k55dg/POD" id=dbaef898-6e40-49a4-bd2c-b6f6faf31ce1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.562590403Z" level=info msg="Ran pod sandbox f1834151f52bc09f60935191c9f8eed65bba13df68b2e2d6a4a5d3511634ab10 with infra container: kube-system/kindnet-7vnx2/POD" id=5a90c108-90dd-4846-a04b-020611a634bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.565977669Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c7ce3edd-d0e9-4ba5-bb29-946bf80c7158 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.567104215Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=565c2a85-8c3d-4362-8ce8-f9085a969c4d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.567339398Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2938b03d-a5cf-488b-ad74-870bb1005dab name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.569108646Z" level=info msg="Creating container: kube-system/kindnet-7vnx2/kindnet-cni" id=58844da2-024c-4898-a455-d8fb5d91d5bb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.569216924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.574481203Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ab53670d-5916-4cfe-ab5c-fdea17f1748e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.586480621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.58725182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.587432478Z" level=info msg="Creating container: kube-system/kube-proxy-k55dg/kube-proxy" id=f26e7032-d020-4e80-b914-7a5e45e9d182 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.58760844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.620673626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.62138726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.659585409Z" level=info msg="Created container a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303: kube-system/kube-proxy-k55dg/kube-proxy" id=f26e7032-d020-4e80-b914-7a5e45e9d182 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.660020081Z" level=info msg="Created container 1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c: kube-system/kindnet-7vnx2/kindnet-cni" id=58844da2-024c-4898-a455-d8fb5d91d5bb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.662583391Z" level=info msg="Starting container: 1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c" id=07971559-effe-4648-b8d0-3abc529d7cd3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.664104111Z" level=info msg="Starting container: a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303" id=09ff8509-d076-4d92-a06d-22bc4daac9f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.670115566Z" level=info msg="Started container" PID=1042 containerID=a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303 description=kube-system/kube-proxy-k55dg/kube-proxy id=09ff8509-d076-4d92-a06d-22bc4daac9f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b5a814b936de6ec27efefe2519d62550fcc818145772b76e94ec8f6834bb770
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.671057493Z" level=info msg="Started container" PID=1037 containerID=1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c description=kube-system/kindnet-7vnx2/kindnet-cni id=07971559-effe-4648-b8d0-3abc529d7cd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1834151f52bc09f60935191c9f8eed65bba13df68b2e2d6a4a5d3511634ab10
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a30488f17adf4       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   5b5a814b936de       kube-proxy-k55dg                            kube-system
	1627d3a312598       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   f1834151f52bc       kindnet-7vnx2                               kube-system
	9e8bbaa71c603       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   3156b49ba7aec       kube-scheduler-newest-cni-991316            kube-system
	0c28c0cfc004d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   94e6271e4fb63       kube-controller-manager-newest-cni-991316   kube-system
	8b18d0a9af326       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   c53478c467e71       etcd-newest-cni-991316                      kube-system
	d5dff5ae5810d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   f04b9706e49e0       kube-apiserver-newest-cni-991316            kube-system
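	This table is the CRI-O view of the restarted control plane (every container is on ATTEMPT 1). The same listing can be reproduced on the node with crictl, using the namespace label filter minikube itself runs earlier in this log:
	
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# add --quiet to print only the container IDs, as ssh_runner does above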
	
	
	==> describe nodes <==
	Name:               newest-cni-991316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-991316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=newest-cni-991316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_05_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:05:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-991316
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:06:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-991316
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                58335f55-1f55-4122-b10c-c1f511a1797b
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-991316                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-7vnx2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-991316             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-991316    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-k55dg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-991316             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-991316 event: Registered Node newest-cni-991316 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-991316 event: Registered Node newest-cni-991316 in Controller
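	The NotReady condition and the node.kubernetes.io/not-ready taint above persist until CRI-O reports NetworkReady=true, i.e. until a CNI config appears in /etc/cni/net.d (written by the kindnet pod). One hedged way to watch for that transition with standard kubectl:
	
	kubectl --context newest-cni-991316 get node newest-cni-991316 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "False" until the network plugin is ready, then "True"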
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [8b18d0a9af326b9eb1103dc3d046d1ec2ec745aaf68662fc1898a5226313f65f] <==
	{"level":"warn","ts":"2025-12-16T03:06:26.292945Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:25.974869Z","time spent":"318.066833ms","remote":"127.0.0.1:33696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":5080,"request content":"key:\"/registry/pods/kube-system/kube-proxy-k55dg\" limit:1 "}
	{"level":"info","ts":"2025-12-16T03:06:26.292964Z","caller":"traceutil/trace.go:172","msg":"trace[1599206036] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-7vnx2; range_end:; response_count:1; response_revision:436; }","duration":"158.358742ms","start":"2025-12-16T03:06:26.134597Z","end":"2025-12-16T03:06:26.292955Z","steps":["trace[1599206036] 'agreement among raft nodes before linearized reading'  (duration: 158.240227ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.292967Z","caller":"traceutil/trace.go:172","msg":"trace[1064310780] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"316.465699ms","start":"2025-12-16T03:06:25.976484Z","end":"2025-12-16T03:06:26.292950Z","steps":["trace[1064310780] 'process raft request'  (duration: 259.095015ms)","trace[1064310780] 'compare'  (duration: 57.126906ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.293392Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:25.976466Z","time spent":"316.885056ms","remote":"127.0.0.1:33530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":686,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/newest-cni-991316.1881933483dc9764\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/newest-cni-991316.1881933483dc9764\" value_size:609 lease:6414985302981273270 >> failure:<>"}
	{"level":"warn","ts":"2025-12-16T03:06:26.293119Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.966978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:5009"}
	{"level":"info","ts":"2025-12-16T03:06:26.293544Z","caller":"traceutil/trace.go:172","msg":"trace[1405788312] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-991316; range_end:; response_count:1; response_revision:436; }","duration":"153.390134ms","start":"2025-12-16T03:06:26.140143Z","end":"2025-12-16T03:06:26.293533Z","steps":["trace[1405788312] 'agreement among raft nodes before linearized reading'  (duration: 152.901546ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.293123Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.669922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1137"}
	{"level":"info","ts":"2025-12-16T03:06:26.293680Z","caller":"traceutil/trace.go:172","msg":"trace[722680565] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:436; }","duration":"141.231239ms","start":"2025-12-16T03:06:26.152438Z","end":"2025-12-16T03:06:26.293669Z","steps":["trace[722680565] 'agreement among raft nodes before linearized reading'  (duration: 140.440135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.293200Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.276646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:7914"}
	{"level":"info","ts":"2025-12-16T03:06:26.294871Z","caller":"traceutil/trace.go:172","msg":"trace[428045109] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-991316; range_end:; response_count:1; response_revision:436; }","duration":"158.937235ms","start":"2025-12-16T03:06:26.135917Z","end":"2025-12-16T03:06:26.294854Z","steps":["trace[428045109] 'agreement among raft nodes before linearized reading'  (duration: 157.230598ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.431862Z","caller":"traceutil/trace.go:172","msg":"trace[490841297] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"132.960907ms","start":"2025-12-16T03:06:26.298877Z","end":"2025-12-16T03:06:26.431838Z","steps":["trace[490841297] 'process raft request'  (duration: 124.341666ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.432157Z","caller":"traceutil/trace.go:172","msg":"trace[1288058902] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"127.103244ms","start":"2025-12-16T03:06:26.305036Z","end":"2025-12-16T03:06:26.432139Z","steps":["trace[1288058902] 'process raft request'  (duration: 126.754351ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.432125Z","caller":"traceutil/trace.go:172","msg":"trace[1721760253] transaction","detail":"{read_only:false; number_of_response:0; response_revision:438; }","duration":"130.206464ms","start":"2025-12-16T03:06:26.301904Z","end":"2025-12-16T03:06:26.432110Z","steps":["trace[1721760253] 'process raft request'  (duration: 129.845531ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.543369Z","caller":"traceutil/trace.go:172","msg":"trace[1952504786] linearizableReadLoop","detail":"{readStateIndex:462; appliedIndex:462; }","duration":"104.25026ms","start":"2025-12-16T03:06:26.439096Z","end":"2025-12-16T03:06:26.543346Z","steps":["trace[1952504786] 'read index received'  (duration: 104.241917ms)","trace[1952504786] 'applied index is now lower than readState.Index'  (duration: 7.216µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.553670Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.555609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-991316.1881933483dc6375\" limit:1 ","response":"range_response_count:1 size:705"}
	{"level":"info","ts":"2025-12-16T03:06:26.554766Z","caller":"traceutil/trace.go:172","msg":"trace[1002969381] range","detail":"{range_begin:/registry/events/default/newest-cni-991316.1881933483dc6375; range_end:; response_count:1; response_revision:439; }","duration":"115.661248ms","start":"2025-12-16T03:06:26.439091Z","end":"2025-12-16T03:06:26.554752Z","steps":["trace[1002969381] 'agreement among raft nodes before linearized reading'  (duration: 104.360259ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.554025Z","caller":"traceutil/trace.go:172","msg":"trace[1102162955] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"115.799113ms","start":"2025-12-16T03:06:26.438210Z","end":"2025-12-16T03:06:26.554009Z","steps":["trace[1102162955] 'process raft request'  (duration: 105.281519ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.554331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.669723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-12-16T03:06:26.555100Z","caller":"traceutil/trace.go:172","msg":"trace[1119378501] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:440; }","duration":"105.438917ms","start":"2025-12-16T03:06:26.449646Z","end":"2025-12-16T03:06:26.555085Z","steps":["trace[1119378501] 'agreement among raft nodes before linearized reading'  (duration: 104.622634ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.554366Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.804372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-16T03:06:26.554419Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.412099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:7448"}
	{"level":"warn","ts":"2025-12-16T03:06:26.554472Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.892234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:5976"}
	{"level":"info","ts":"2025-12-16T03:06:26.556108Z","caller":"traceutil/trace.go:172","msg":"trace[451007478] range","detail":"{range_begin:/registry/pods/kube-system/etcd-newest-cni-991316; range_end:; response_count:1; response_revision:440; }","duration":"116.519553ms","start":"2025-12-16T03:06:26.439576Z","end":"2025-12-16T03:06:26.556095Z","steps":["trace[451007478] 'agreement among raft nodes before linearized reading'  (duration: 114.85819ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.556440Z","caller":"traceutil/trace.go:172","msg":"trace[1720046198] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:440; }","duration":"106.872228ms","start":"2025-12-16T03:06:26.449558Z","end":"2025-12-16T03:06:26.556430Z","steps":["trace[1720046198] 'agreement among raft nodes before linearized reading'  (duration: 104.79226ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.557748Z","caller":"traceutil/trace.go:172","msg":"trace[1772874145] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-newest-cni-991316; range_end:; response_count:1; response_revision:440; }","duration":"117.730576ms","start":"2025-12-16T03:06:26.440002Z","end":"2025-12-16T03:06:26.557732Z","steps":["trace[1772874145] 'agreement among raft nodes before linearized reading'  (duration: 114.373729ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:06:31 up 49 min,  0 user,  load average: 5.28, 3.35, 2.10
	Linux newest-cni-991316 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c] <==
	I1216 03:06:26.892131       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:06:26.892636       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 03:06:26.892783       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:06:26.892801       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:06:26.892843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:06:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:06:27.189149       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:06:27.189200       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:06:27.189213       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:06:27.189372       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:06:27.589351       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:06:27.590083       1 metrics.go:72] Registering metrics
	I1216 03:06:27.590187       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [d5dff5ae5810d412e8907ca08c813052e5139b282f71ec0fa1e0c388545594ef] <==
	I1216 03:06:25.549016       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:06:25.549047       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:06:25.549066       1 aggregator.go:187] initial CRD sync complete...
	I1216 03:06:25.549080       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:06:25.549079       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:06:25.549090       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:25.549086       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:06:25.549142       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:06:25.549300       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:06:25.559327       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:06:25.561665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:06:25.574051       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:25.574080       1 policy_source.go:248] refreshing policies
	I1216 03:06:25.581253       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:06:25.974454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:06:26.297792       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:06:26.622837       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 03:06:26.695728       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:06:26.739581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:06:26.755975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:06:26.843867       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.77.8"}
	I1216 03:06:26.860013       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.86.21"}
	I1216 03:06:29.104688       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:06:29.154335       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:06:29.203860       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0c28c0cfc004d90699e4e87cdd35e0b26b6c417656ded6b7c595335d959d33dc] <==
	I1216 03:06:28.717023       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.716883       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.716899       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.717032       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.714668       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.717056       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.717040       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.716972       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.719443       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1216 03:06:28.720238       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-991316"
	I1216 03:06:28.720348       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1216 03:06:28.720274       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.721965       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.722014       1 range_allocator.go:177] "Sending events to api server"
	I1216 03:06:28.722049       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1216 03:06:28.722062       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:28.722068       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.722504       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.725363       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.725883       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.734783       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:28.811037       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.811061       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 03:06:28.811070       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 03:06:28.835927       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303] <==
	I1216 03:06:26.748843       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:06:26.827248       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:26.927665       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:26.927732       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 03:06:26.927896       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:06:26.964561       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:06:26.964676       1 server_linux.go:136] "Using iptables Proxier"
	I1216 03:06:26.977772       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:06:26.978159       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 03:06:26.978403       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:26.980658       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:06:26.980677       1 config.go:200] "Starting service config controller"
	I1216 03:06:26.980691       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:06:26.980704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:06:26.981244       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:06:26.982151       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:06:26.981390       1 config.go:309] "Starting node config controller"
	I1216 03:06:26.982256       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:06:26.982288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:06:27.081616       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:06:27.081628       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:06:27.082965       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9e8bbaa71c603609c449dee8ce46d5c12489f28238ea9376f424476c5cbd1af3] <==
	I1216 03:06:23.484928       1 serving.go:386] Generated self-signed cert in-memory
	W1216 03:06:25.488444       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:06:25.488579       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1216 03:06:25.488595       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:06:25.488604       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:06:25.507170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1216 03:06:25.507201       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:25.510032       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:06:25.510066       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:25.510195       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:06:25.510263       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:06:25.610641       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.785910     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-991316" containerName="kube-apiserver"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.786935     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-991316" containerName="kube-scheduler"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.787050     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-991316" containerName="kube-controller-manager"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.787208     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-991316" containerName="etcd"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.812623     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-991316\" already exists" pod="kube-system/kube-scheduler-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.812705     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.827843     663 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.827941     663 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.827984     663 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.831587     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-991316\" already exists" pod="kube-system/etcd-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.831765     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.833048     663 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.839739     663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868246     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-lib-modules\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868296     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-lib-modules\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868339     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-cni-cfg\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868369     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-xtables-lock\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868404     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-xtables-lock\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:26 newest-cni-991316 kubelet[663]: E1216 03:06:26.298455     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-991316\" already exists" pod="kube-system/kube-apiserver-newest-cni-991316"
	Dec 16 03:06:26 newest-cni-991316 kubelet[663]: I1216 03:06:26.298510     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-991316"
	Dec 16 03:06:26 newest-cni-991316 kubelet[663]: E1216 03:06:26.566449     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-991316\" already exists" pod="kube-system/kube-controller-manager-newest-cni-991316"
	Dec 16 03:06:28 newest-cni-991316 kubelet[663]: I1216 03:06:28.827253     663 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 03:06:28 newest-cni-991316 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:06:28 newest-cni-991316 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:06:28 newest-cni-991316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
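	The final kubelet entries show systemd deactivating kubelet.service at 03:06:28, consistent with the Pause step this post-mortem belongs to (pausing a profile stops the kubelet and freezes the workload containers). A hedged way to confirm that state from the host:
	
	minikube -p newest-cni-991316 ssh -- sudo systemctl is-active kubelet
	# expected to print "inactive" while the profile is paused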
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-991316 -n newest-cni-991316
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-991316 -n newest-cni-991316: exit status 2 (342.502648ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-991316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx: exit status 1 (63.018676ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-86ggg" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-r7zbq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kdkrx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-991316
helpers_test.go:244: (dbg) docker inspect newest-cni-991316:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d",
	        "Created": "2025-12-16T03:05:44.429433316Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301948,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:06:15.174095749Z",
	            "FinishedAt": "2025-12-16T03:06:13.949912546Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/hosts",
	        "LogPath": "/var/lib/docker/containers/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d/4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d-json.log",
	        "Name": "/newest-cni-991316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-991316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-991316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4f4fbbe065795371c6c32bbea9f5f42159338a3a37bff8ddaea0af4a12e7c86d",
	                "LowerDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1deda0f71b9eeea12aff455d028237aa863355674e0430b723a9f968ff770cd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-991316",
	                "Source": "/var/lib/docker/volumes/newest-cni-991316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-991316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-991316",
	                "name.minikube.sigs.k8s.io": "newest-cni-991316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fe2cb9f3bd270400189b52407d916478b06fc0f50a7b57ad136e1d0c7d2afb30",
	            "SandboxKey": "/var/run/docker/netns/fe2cb9f3bd27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-991316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5f2a89125abbce7b9991af7d91b2faefd2ac42de4f13e650434f1e7fd46fcce",
	                    "EndpointID": "a50ddda68fefc79da98b9964075449fe0cbbdfc36745aa8d9c731ec83a3dc12f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "92:5f:6a:73:72:4c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-991316",
	                        "4f4fbbe06579"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
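Note: when reproducing this post-mortem by hand, the forwarded host ports recorded under NetworkSettings.Ports above can be read directly with a Go-template filter instead of scanning the full JSON. The port numbers (33093-33097 in this run) are assigned dynamically and will differ between runs.

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-991316
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' newest-cni-991316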
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316: exit status 2 (348.466142ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
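Note: both {{.APIServer}} and {{.Host}} print Running, yet the command exits with status 2; a non-zero exit from minikube status generally means at least one tracked component is not reported as Running (the kubelet was stopped during the pause attempt, per the journal earlier in this log). For the full component breakdown, drop the --format filter; the --output json form assumes this build supports it, as recent minikube releases do.

	out/minikube-linux-amd64 status -p newest-cni-991316
	out/minikube-linux-amd64 status -p newest-cni-991316 --output json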
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-991316 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-991316 logs -n 25: (1.028589322s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ delete  │ -p kubernetes-upgrade-058433                                                                                                                                                                                                                         │ kubernetes-upgrade-058433    │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:05 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-079165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-079165 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:05 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-991316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ old-k8s-version-073001 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ image   │ no-preload-307185 image list --format=json                                                                                                                                                                                                           │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p no-preload-307185 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-991316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p disable-driver-mounts-899443                                                                                                                                                                                                                      │ disable-driver-mounts-899443 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p auto-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ image   │ newest-cni-991316 image list --format=json                                                                                                                                                                                                           │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p newest-cni-991316 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:22.284329  305678 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:22.284617  305678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:22.284631  305678 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:22.284638  305678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:22.284954  305678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:22.285678  305678 out.go:368] Setting JSON to false
	I1216 03:06:22.287282  305678 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2934,"bootTime":1765851448,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:22.287373  305678 start.go:143] virtualization: kvm guest
	I1216 03:06:22.290022  305678 out.go:179] * [auto-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:22.291458  305678 notify.go:221] Checking for updates...
	I1216 03:06:22.292228  305678 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:22.293749  305678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:22.295150  305678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:22.296681  305678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:22.298011  305678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:22.299583  305678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:22.302223  305678 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:22.302393  305678 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:22.302539  305678 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:22.302663  305678 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:22.336116  305678 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:22.336268  305678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:22.411201  305678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-16 03:06:22.398711684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:22.411340  305678 docker.go:319] overlay module found
	I1216 03:06:22.414008  305678 out.go:179] * Using the docker driver based on user configuration
	I1216 03:06:22.415040  305678 start.go:309] selected driver: docker
	I1216 03:06:22.415058  305678 start.go:927] validating driver "docker" against <nil>
	I1216 03:06:22.415073  305678 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:22.415884  305678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:22.492970  305678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-16 03:06:22.480930076 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:22.493168  305678 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:06:22.493459  305678 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:22.495083  305678 out.go:179] * Using Docker driver with root privileges
	I1216 03:06:22.496325  305678 cni.go:84] Creating CNI manager for ""
	I1216 03:06:22.496400  305678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:22.496415  305678 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:06:22.496494  305678 start.go:353] cluster config:
	{Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1216 03:06:22.501228  305678 out.go:179] * Starting "auto-646016" primary control-plane node in "auto-646016" cluster
	I1216 03:06:22.502348  305678 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:22.503563  305678 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:22.506039  305678 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:22.506075  305678 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:22.506084  305678 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:22.506150  305678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:22.506209  305678 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:22.506224  305678 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:22.506376  305678 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/config.json ...
	I1216 03:06:22.506409  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/config.json: {Name:mk6894176fd87eb172eff7a30a02ce744943e5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:22.532894  305678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:22.532917  305678 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:22.532932  305678 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:22.532965  305678 start.go:360] acquireMachinesLock for auto-646016: {Name:mk6f07284451993c7ba7d88753d28ad1c708a70d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:22.533062  305678 start.go:364] duration metric: took 72.426µs to acquireMachinesLock for "auto-646016"
	I1216 03:06:22.533087  305678 start.go:93] Provisioning new machine with config: &{Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:22.533197  305678 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:06:22.128033  301603 kubeadm.go:884] updating cluster {Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:22.128196  301603 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 03:06:22.128279  301603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:22.172317  301603 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:22.172344  301603 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:22.172399  301603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:22.207442  301603 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:22.207467  301603 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:22.207477  301603 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 03:06:22.207594  301603 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-991316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:06:22.207680  301603 ssh_runner.go:195] Run: crio config
	I1216 03:06:22.277901  301603 cni.go:84] Creating CNI manager for ""
	I1216 03:06:22.277928  301603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:22.277945  301603 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 03:06:22.277974  301603 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-991316 NodeName:newest-cni-991316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:22.278189  301603 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-991316"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:22.278269  301603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:06:22.290252  301603 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:22.290338  301603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:22.301484  301603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 03:06:22.322454  301603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 03:06:22.339874  301603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 03:06:22.357213  301603 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:22.366199  301603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:22.381658  301603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:22.507510  301603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:22.531875  301603 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316 for IP: 192.168.76.2
	I1216 03:06:22.531909  301603 certs.go:195] generating shared ca certs ...
	I1216 03:06:22.531933  301603 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:22.532075  301603 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:22.532345  301603 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:22.532407  301603 certs.go:257] generating profile certs ...
	I1216 03:06:22.532582  301603 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/client.key
	I1216 03:06:22.533119  301603 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key.4c5ce275
	I1216 03:06:22.533264  301603 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key
	I1216 03:06:22.533447  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:22.533495  301603 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:22.533510  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:22.533552  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:22.533589  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:22.533623  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:22.533692  301603 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:22.534586  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:22.560908  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:22.586041  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:22.613702  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:22.647956  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 03:06:22.681696  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:22.706692  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:22.731333  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/newest-cni-991316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:06:22.754198  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:22.778251  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:22.800139  301603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:22.819225  301603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:22.841853  301603 ssh_runner.go:195] Run: openssl version
	I1216 03:06:22.850174  301603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.859571  301603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:22.867979  301603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.872132  301603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.872200  301603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:22.907564  301603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:22.916207  301603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.924618  301603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:22.932990  301603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.936876  301603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.936925  301603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:22.977161  301603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:22.985496  301603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:22.994688  301603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:23.005679  301603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:23.009859  301603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:23.009947  301603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:23.056975  301603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:23.066308  301603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:23.071232  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:06:23.118076  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:06:23.172995  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:06:23.237539  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:06:23.300442  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:06:23.359472  301603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 03:06:23.413346  301603 kubeadm.go:401] StartCluster: {Name:newest-cni-991316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-991316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:23.413470  301603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:23.413536  301603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:23.468024  301603 cri.go:89] found id: "9e8bbaa71c603609c449dee8ce46d5c12489f28238ea9376f424476c5cbd1af3"
	I1216 03:06:23.468104  301603 cri.go:89] found id: "0c28c0cfc004d90699e4e87cdd35e0b26b6c417656ded6b7c595335d959d33dc"
	I1216 03:06:23.468123  301603 cri.go:89] found id: "8b18d0a9af326b9eb1103dc3d046d1ec2ec745aaf68662fc1898a5226313f65f"
	I1216 03:06:23.468139  301603 cri.go:89] found id: "d5dff5ae5810d412e8907ca08c813052e5139b282f71ec0fa1e0c388545594ef"
	I1216 03:06:23.468170  301603 cri.go:89] found id: ""
	I1216 03:06:23.468257  301603 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 03:06:23.485622  301603 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:06:23Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:06:23.485700  301603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:23.497091  301603 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 03:06:23.497112  301603 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 03:06:23.497187  301603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 03:06:23.506664  301603 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:06:23.507425  301603 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-991316" does not appear in /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:23.507813  301603 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-5058/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-991316" cluster setting kubeconfig missing "newest-cni-991316" context setting]
	I1216 03:06:23.509381  301603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:23.511517  301603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 03:06:23.522389  301603 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
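The kubeconfig repair above only rewrites the missing cluster and context entries for "newest-cni-991316". A sketch of confirming the repair by hand, assuming the kubeconfig path from the log:

	KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig \
	  kubectl config get-contexts newest-cni-991316
	KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig \
	  kubectl config view -o jsonpath='{.clusters[*].name}'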
	I1216 03:06:23.522490  301603 kubeadm.go:602] duration metric: took 25.367584ms to restartPrimaryControlPlane
	I1216 03:06:23.522539  301603 kubeadm.go:403] duration metric: took 109.200619ms to StartCluster
	I1216 03:06:23.522578  301603 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:23.522653  301603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:23.523890  301603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:23.524393  301603 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:23.524550  301603 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:23.524692  301603 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-991316"
	I1216 03:06:23.524727  301603 config.go:182] Loaded profile config "newest-cni-991316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:06:23.524723  301603 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-991316"
	W1216 03:06:23.524879  301603 addons.go:248] addon storage-provisioner should already be in state true
	I1216 03:06:23.524926  301603 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:23.524740  301603 addons.go:70] Setting dashboard=true in profile "newest-cni-991316"
	I1216 03:06:23.524970  301603 addons.go:239] Setting addon dashboard=true in "newest-cni-991316"
	W1216 03:06:23.524980  301603 addons.go:248] addon dashboard should already be in state true
	I1216 03:06:23.525006  301603 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:23.524748  301603 addons.go:70] Setting default-storageclass=true in profile "newest-cni-991316"
	I1216 03:06:23.525079  301603 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-991316"
	I1216 03:06:23.525371  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.525403  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.525447  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.527279  301603 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:23.529457  301603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:23.552265  301603 addons.go:239] Setting addon default-storageclass=true in "newest-cni-991316"
	W1216 03:06:23.552288  301603 addons.go:248] addon default-storageclass should already be in state true
	I1216 03:06:23.552343  301603 host.go:66] Checking if "newest-cni-991316" exists ...
	I1216 03:06:23.552905  301603 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 03:06:23.552989  301603 cli_runner.go:164] Run: docker container inspect newest-cni-991316 --format={{.State.Status}}
	I1216 03:06:23.557315  301603 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:06:23.558628  301603 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:23.558666  301603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:23.558628  301603 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 03:06:23.558747  301603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:23.564984  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 03:06:23.565014  301603 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 03:06:23.565092  301603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:23.586572  301603 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:23.586595  301603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:23.586659  301603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-991316
	I1216 03:06:23.588714  301603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:23.602129  301603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:23.612807  301603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/newest-cni-991316/id_rsa Username:docker}
	I1216 03:06:23.692248  301603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:23.715538  301603 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:23.715645  301603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:23.722473  301603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:23.726299  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 03:06:23.726325  301603 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 03:06:23.734200  301603 api_server.go:72] duration metric: took 209.767894ms to wait for apiserver process to appear ...
	I1216 03:06:23.734226  301603 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:23.734246  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:23.743462  301603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:23.751207  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 03:06:23.751230  301603 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 03:06:23.771921  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 03:06:23.771947  301603 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 03:06:23.797561  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 03:06:23.797671  301603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 03:06:23.821609  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 03:06:23.821676  301603 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 03:06:23.838868  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 03:06:23.838892  301603 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 03:06:23.857286  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 03:06:23.857310  301603 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 03:06:23.877134  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 03:06:23.877158  301603 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 03:06:23.894180  301603 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 03:06:23.894201  301603 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 03:06:23.910211  301603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
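The dashboard manifests staged above are applied in a single kubectl invocation (its completion is logged further below). Once it finishes, one way to confirm the dashboard workload came up; the namespace and deployment name here are the usual minikube dashboard-addon defaults and are an assumption, not taken from this log:

	kubectl -n kubernetes-dashboard get pods
	kubectl -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard --timeout=120s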
	I1216 03:06:20.369853  301866 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-742794:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.174919027s)
	I1216 03:06:20.369886  301866 kic.go:203] duration metric: took 4.175098231s to extract preloaded images to volume ...
	W1216 03:06:20.369989  301866 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:06:20.370036  301866 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:06:20.370085  301866 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:06:20.435424  301866 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-742794 --name embed-certs-742794 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-742794 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-742794 --network embed-certs-742794 --ip 192.168.103.2 --volume embed-certs-742794:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:06:20.870758  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Running}}
	I1216 03:06:20.898056  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:20.923576  301866 cli_runner.go:164] Run: docker exec embed-certs-742794 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:06:21.017374  301866 oci.go:144] the created container "embed-certs-742794" has a running status.
	I1216 03:06:21.017472  301866 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa...
	I1216 03:06:21.070271  301866 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:06:21.688706  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:21.714176  301866 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:06:21.714201  301866 kic_runner.go:114] Args: [docker exec --privileged embed-certs-742794 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:06:21.776350  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:21.801321  301866 machine.go:94] provisionDockerMachine start ...
	I1216 03:06:21.801417  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:21.824095  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:21.824618  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:21.824635  301866 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:06:21.976018  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-742794
	
	I1216 03:06:21.976048  301866 ubuntu.go:182] provisioning hostname "embed-certs-742794"
	I1216 03:06:21.976111  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:21.999510  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:21.999761  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:21.999785  301866 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-742794 && echo "embed-certs-742794" | sudo tee /etc/hostname
	I1216 03:06:22.166726  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-742794
	
	I1216 03:06:22.166805  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:22.190474  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:22.190814  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:22.190863  301866 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-742794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-742794/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-742794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:06:22.347543  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:06:22.347569  301866 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:06:22.347592  301866 ubuntu.go:190] setting up certificates
	I1216 03:06:22.347613  301866 provision.go:84] configureAuth start
	I1216 03:06:22.347673  301866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-742794
	I1216 03:06:22.377924  301866 provision.go:143] copyHostCerts
	I1216 03:06:22.378064  301866 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:06:22.378093  301866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:06:22.378182  301866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:06:22.378308  301866 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:06:22.378340  301866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:06:22.378399  301866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:06:22.378496  301866 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:06:22.378518  301866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:06:22.378567  301866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:06:22.378660  301866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.embed-certs-742794 san=[127.0.0.1 192.168.103.2 embed-certs-742794 localhost minikube]
	I1216 03:06:22.449181  301866 provision.go:177] copyRemoteCerts
	I1216 03:06:22.449377  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:06:22.449453  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:22.477150  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:22.593766  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:06:22.624253  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 03:06:22.658288  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:06:22.682985  301866 provision.go:87] duration metric: took 335.351735ms to configureAuth
	I1216 03:06:22.683179  301866 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:06:22.683400  301866 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:22.683536  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:22.708343  301866 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:22.708617  301866 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1216 03:06:22.708644  301866 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:06:23.042592  301866 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:06:23.042617  301866 machine.go:97] duration metric: took 1.241273928s to provisionDockerMachine
	I1216 03:06:23.042629  301866 client.go:176] duration metric: took 7.589920989s to LocalClient.Create
	I1216 03:06:23.042654  301866 start.go:167] duration metric: took 7.589999024s to libmachine.API.Create "embed-certs-742794"
	I1216 03:06:23.042664  301866 start.go:293] postStartSetup for "embed-certs-742794" (driver="docker")
	I1216 03:06:23.042678  301866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:06:23.042747  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:06:23.042793  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.065944  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.177669  301866 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:06:23.183838  301866 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:06:23.183868  301866 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:06:23.183918  301866 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:06:23.183977  301866 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:06:23.184101  301866 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:06:23.184238  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:06:23.194602  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:23.226693  301866 start.go:296] duration metric: took 184.012005ms for postStartSetup
	I1216 03:06:23.227134  301866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-742794
	I1216 03:06:23.258049  301866 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/config.json ...
	I1216 03:06:23.258334  301866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:06:23.258383  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.292911  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.410879  301866 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:06:23.418027  301866 start.go:128] duration metric: took 7.968436213s to createHost
	I1216 03:06:23.418097  301866 start.go:83] releasing machines lock for "embed-certs-742794", held for 7.968626597s
	I1216 03:06:23.418206  301866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-742794
	I1216 03:06:23.445468  301866 ssh_runner.go:195] Run: cat /version.json
	I1216 03:06:23.445589  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.445493  301866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:06:23.445897  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:23.470699  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.473126  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:23.673962  301866 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:23.683875  301866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:06:23.743136  301866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:06:23.751637  301866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:06:23.751728  301866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:06:23.797583  301866 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:06:23.797601  301866 start.go:496] detecting cgroup driver to use...
	I1216 03:06:23.797634  301866 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:06:23.797692  301866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:06:23.820384  301866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:06:23.836499  301866 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:06:23.836562  301866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:06:23.859761  301866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:06:23.885604  301866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:06:24.011746  301866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:06:24.140250  301866 docker.go:234] disabling docker service ...
	I1216 03:06:24.140318  301866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:06:24.170922  301866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:06:24.188528  301866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:06:24.311433  301866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:06:24.451002  301866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:06:24.472043  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:06:24.493585  301866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:06:24.493654  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.508679  301866 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:06:24.508751  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.524671  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.538760  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.552880  301866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:06:24.565131  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.579481  301866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.598919  301866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:24.611724  301866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:06:24.622312  301866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:06:24.634048  301866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:24.760867  301866 ssh_runner.go:195] Run: sudo systemctl restart crio
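The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, systemd cgroup manager, conmon cgroup, unprivileged port sysctl) before restarting cri-o. Condensed into one sketch using the same file and keys shown in the log:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio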
	W1216 03:06:21.030090  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:23.528010  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:25.563037  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:25.457307  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 03:06:25.457333  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 03:06:25.457347  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:25.497843  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:25.497899  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
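The verbose bodies above come from the apiserver's /healthz endpoint, which lists each post-start hook with a [+]/[-] marker while the control plane finishes bootstrapping; minikube keeps polling until it returns 200. A sketch of the same poll by hand, using the endpoint from the log (-k skips TLS verification for brevity):

	until curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.76.2:8443/healthz | grep -q 200; do
	  sleep 1
	done
	curl -sk 'https://192.168.76.2:8443/healthz?verbose'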
	I1216 03:06:25.735013  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:25.740098  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:25.740125  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:26.234845  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:26.240670  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:26.240698  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:26.708028  301603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.964534055s)
	I1216 03:06:26.708749  301603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.986243655s)
	I1216 03:06:26.734888  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:26.740846  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:06:26.740878  301603 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:06:26.869794  301603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.9595218s)
	I1216 03:06:26.871403  301603 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-991316 addons enable metrics-server
	
	I1216 03:06:26.873108  301603 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1216 03:06:26.837675  301866 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.076773443s)
	I1216 03:06:26.837899  301866 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:06:26.837983  301866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:06:26.843598  301866 start.go:564] Will wait 60s for crictl version
	I1216 03:06:26.843795  301866 ssh_runner.go:195] Run: which crictl
	I1216 03:06:26.850256  301866 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:06:26.887987  301866 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:06:26.888122  301866 ssh_runner.go:195] Run: crio --version
	I1216 03:06:26.929930  301866 ssh_runner.go:195] Run: crio --version
	I1216 03:06:27.011178  301866 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 03:06:22.536107  305678 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:06:22.536397  305678 start.go:159] libmachine.API.Create for "auto-646016" (driver="docker")
	I1216 03:06:22.536441  305678 client.go:173] LocalClient.Create starting
	I1216 03:06:22.536529  305678 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:06:22.536573  305678 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:22.536594  305678 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:22.536650  305678 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:06:22.536679  305678 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:22.536695  305678 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:22.537250  305678 cli_runner.go:164] Run: docker network inspect auto-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:06:22.561279  305678 cli_runner.go:211] docker network inspect auto-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:06:22.561362  305678 network_create.go:284] running [docker network inspect auto-646016] to gather additional debugging logs...
	I1216 03:06:22.561390  305678 cli_runner.go:164] Run: docker network inspect auto-646016
	W1216 03:06:22.584887  305678 cli_runner.go:211] docker network inspect auto-646016 returned with exit code 1
	I1216 03:06:22.584920  305678 network_create.go:287] error running [docker network inspect auto-646016]: docker network inspect auto-646016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-646016 not found
	I1216 03:06:22.584997  305678 network_create.go:289] output of [docker network inspect auto-646016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-646016 not found
	
	** /stderr **
	I1216 03:06:22.585151  305678 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:22.611540  305678 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:06:22.612584  305678 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:06:22.613754  305678 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:06:22.614672  305678 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e5f2a89125ab IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e3:05:bd:28:c9} reservation:<nil>}
	I1216 03:06:22.615574  305678 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5282d64d27b5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:9a:a8:09:ec:bc:45} reservation:<nil>}
	I1216 03:06:22.617217  305678 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2ba0}
	I1216 03:06:22.617241  305678 network_create.go:124] attempt to create docker network auto-646016 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 03:06:22.617278  305678 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-646016 auto-646016
	I1216 03:06:22.694268  305678 network_create.go:108] docker network auto-646016 192.168.94.0/24 created
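Editor's note: the lines above show minikube probing the existing bridge networks, settling on the first free /24 (192.168.94.0/24) and creating a labeled bridge network for the profile. As an illustrative check only (not part of the test log; the network name comes from this run), the chosen subnet and minikube label can be read back with:

  $ docker network inspect auto-646016 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{index .Labels "created_by.minikube.sigs.k8s.io"}}'

which should print something like "192.168.94.0/24 true".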
	I1216 03:06:22.694305  305678 kic.go:121] calculated static IP "192.168.94.2" for the "auto-646016" container
	I1216 03:06:22.694374  305678 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:06:22.720025  305678 cli_runner.go:164] Run: docker volume create auto-646016 --label name.minikube.sigs.k8s.io=auto-646016 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:06:22.746421  305678 oci.go:103] Successfully created a docker volume auto-646016
	I1216 03:06:22.746503  305678 cli_runner.go:164] Run: docker run --rm --name auto-646016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-646016 --entrypoint /usr/bin/test -v auto-646016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:06:23.463550  305678 oci.go:107] Successfully prepared a docker volume auto-646016
	I1216 03:06:23.463656  305678 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:23.463668  305678 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:06:23.463743  305678 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
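Editor's note: the preload tarball mounted here is an lz4-compressed tar of the CRI-O image store, and the sidecar container unpacks it straight into the auto-646016 volume. Purely as an illustration (the path is the one logged above; on a default install it sits under ~/.minikube/cache), its contents can be listed without extracting:

  $ lz4 -dc /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 \
      | tar -tf - | head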
	I1216 03:06:26.874256  301603 addons.go:530] duration metric: took 3.349721117s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1216 03:06:27.235021  301603 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:06:27.239550  301603 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 03:06:27.240707  301603 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 03:06:27.240737  301603 api_server.go:131] duration metric: took 3.506503204s to wait for apiserver health ...
	I1216 03:06:27.240748  301603 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:27.244845  301603 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:27.244893  301603 system_pods.go:61] "coredns-7d764666f9-86ggg" [7d507301-7465-4008-a336-b3ccdf6ac711] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 03:06:27.244918  301603 system_pods.go:61] "etcd-newest-cni-991316" [628355b8-6876-4153-97e8-294f83717eaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:06:27.244928  301603 system_pods.go:61] "kindnet-7vnx2" [693caa56-221c-4967-b459-24c95a6f228b] Running
	I1216 03:06:27.244940  301603 system_pods.go:61] "kube-apiserver-newest-cni-991316" [80fa29df-b694-4669-a80b-e62f176662a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:06:27.244955  301603 system_pods.go:61] "kube-controller-manager-newest-cni-991316" [6cff15c4-01ea-444f-8e42-d10e73a10abf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:06:27.244965  301603 system_pods.go:61] "kube-proxy-k55dg" [3dcf431e-16a0-4327-b437-ad2b0b7cbea0] Running
	I1216 03:06:27.244973  301603 system_pods.go:61] "kube-scheduler-newest-cni-991316" [17447c80-9e25-41d6-844f-3714404a2404] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:06:27.244984  301603 system_pods.go:61] "storage-provisioner" [b2aa6962-6de7-4fb0-914b-43e726858087] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 03:06:27.245010  301603 system_pods.go:74] duration metric: took 4.254347ms to wait for pod list to return data ...
	I1216 03:06:27.245031  301603 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:27.248030  301603 default_sa.go:45] found service account: "default"
	I1216 03:06:27.248053  301603 default_sa.go:55] duration metric: took 3.014741ms for default service account to be created ...
	I1216 03:06:27.248067  301603 kubeadm.go:587] duration metric: took 3.723638897s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 03:06:27.248094  301603 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:27.251336  301603 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:27.251370  301603 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:27.251382  301603 node_conditions.go:105] duration metric: took 3.283869ms to run NodePressure ...
	I1216 03:06:27.251393  301603 start.go:242] waiting for startup goroutines ...
	I1216 03:06:27.251399  301603 start.go:247] waiting for cluster config update ...
	I1216 03:06:27.251409  301603 start.go:256] writing updated cluster config ...
	I1216 03:06:27.288804  301603 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:27.357781  301603 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 03:06:27.384151  301603 out.go:179] * Done! kubectl is now configured to use "newest-cni-991316" cluster and "default" namespace by default
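Editor's note: at this point the newest-cni-991316 profile is up and the kubeconfig context has been switched to it; the preceding line also records the tolerated minor skew between the kubectl 1.34.3 client and the 1.35.0-beta.0 control plane. A quick sanity check from the same host (illustrative only, not executed by the test):

  $ kubectl --context newest-cni-991316 get nodes -o wide
  $ kubectl --context newest-cni-991316 -n kube-system get pods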
	I1216 03:06:27.092261  301866 cli_runner.go:164] Run: docker network inspect embed-certs-742794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:27.117273  301866 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 03:06:27.122514  301866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:27.150872  301866 kubeadm.go:884] updating cluster {Name:embed-certs-742794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:27.151033  301866 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:27.151094  301866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:27.196339  301866 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:27.196367  301866 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:27.196421  301866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:27.224753  301866 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:27.224771  301866 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:27.224778  301866 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1216 03:06:27.224907  301866 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-742794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:06:27.224976  301866 ssh_runner.go:195] Run: crio config
	I1216 03:06:27.274734  301866 cni.go:84] Creating CNI manager for ""
	I1216 03:06:27.274760  301866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:27.274778  301866 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:06:27.274799  301866 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-742794 NodeName:embed-certs-742794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:27.275000  301866 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-742794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:27.275073  301866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:06:27.284577  301866 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:27.284651  301866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:27.298386  301866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1216 03:06:27.317463  301866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:06:27.387845  301866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
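Editor's note: the kubeadm config printed above is the payload just copied to /var/tmp/minikube/kubeadm.yaml.new; it is later moved to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm init (see the init invocation further down). If reproducing this by hand inside the node, a dry run is a non-destructive way to sanity-check the file (illustrative, using the same binary and path as in the log):

  $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run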
	I1216 03:06:27.406173  301866 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:27.411438  301866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:27.429319  301866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:27.559467  301866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:27.590359  301866 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794 for IP: 192.168.103.2
	I1216 03:06:27.590406  301866 certs.go:195] generating shared ca certs ...
	I1216 03:06:27.590426  301866 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.590666  301866 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:27.590717  301866 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:27.590728  301866 certs.go:257] generating profile certs ...
	I1216 03:06:27.590810  301866 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.key
	I1216 03:06:27.590849  301866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.crt with IP's: []
	I1216 03:06:27.642299  301866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.crt ...
	I1216 03:06:27.642327  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.crt: {Name:mka8440026461283e7781be649a377ed69c0c334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.642489  301866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.key ...
	I1216 03:06:27.642503  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/client.key: {Name:mk2aed8bec3654e799d7107ebcef6ca8e4309070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.642578  301866 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28
	I1216 03:06:27.642594  301866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 03:06:27.707990  301866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28 ...
	I1216 03:06:27.708025  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28: {Name:mkbf3ef6bfbaa7614efd1e6a67cc5c7d4253e15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.708224  301866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28 ...
	I1216 03:06:27.708243  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28: {Name:mkbe07cb01c2acb53ae9c637dfbf6702d63a9e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.708359  301866 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt.bec48a28 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt
	I1216 03:06:27.708453  301866 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key.bec48a28 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key
	I1216 03:06:27.708533  301866 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key
	I1216 03:06:27.708551  301866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt with IP's: []
	I1216 03:06:27.846831  301866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt ...
	I1216 03:06:27.846860  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt: {Name:mk41dbdda2eae1a3102527058f8d046993235905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:27.847043  301866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key ...
	I1216 03:06:27.847062  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key: {Name:mk339b80387c07e7a8ad4a4459a7c29b4085a338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
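Editor's note: the profile certificates generated above (client, apiserver, proxy-client) are all signed by the shared minikubeCA, and the apiserver cert is issued for the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). As an illustrative check on the host, using the path taken from the log, the SANs can be read back with:

  $ openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt \
      | grep -A1 'Subject Alternative Name'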
	I1216 03:06:27.847290  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:27.847365  301866 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:27.847380  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:27.847418  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:27.847453  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:27.847486  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:27.847551  301866 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:27.848366  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:27.872879  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:27.896282  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:27.922715  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:27.947311  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 03:06:27.973465  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:28.000775  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:28.026223  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/embed-certs-742794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:06:28.049838  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:28.079320  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:28.109005  301866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:28.134354  301866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:28.151514  301866 ssh_runner.go:195] Run: openssl version
	I1216 03:06:28.159443  301866 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.170164  301866 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:28.180498  301866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.187987  301866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.188077  301866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:28.251319  301866 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:28.265738  301866 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:28.278594  301866 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.296085  301866 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:28.310156  301866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.316670  301866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.316750  301866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:28.407757  301866 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:28.418774  301866 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:28.438485  301866 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.449298  301866 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:28.459761  301866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.465454  301866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.465521  301866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:28.517487  301866 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:28.528230  301866 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
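Editor's note: the block above installs each extra certificate under /usr/share/ca-certificates and symlinks it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is what the preceding openssl x509 -hash calls compute. Inside the node (for example via minikube -p embed-certs-742794 ssh) the wiring can be verified like this, illustratively:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ readlink /etc/ssl/certs/b5213941.0
  /usr/share/ca-certificates/minikubeCA.pem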
	I1216 03:06:28.536473  301866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:28.541410  301866 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:28.541465  301866 kubeadm.go:401] StartCluster: {Name:embed-certs-742794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-742794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:28.541628  301866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:28.541695  301866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:28.585896  301866 cri.go:89] found id: ""
	I1216 03:06:28.585974  301866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:28.597446  301866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:28.608323  301866 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:28.608388  301866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:28.623512  301866 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:28.623545  301866 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:28.623591  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:28.641019  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:28.641078  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:28.653179  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:28.664616  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:28.664768  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:28.674755  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:28.685378  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:28.685451  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:28.694813  301866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:28.709080  301866 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:28.709198  301866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:06:28.722080  301866 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:28.780698  301866 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:06:28.780767  301866 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:06:28.809967  301866 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:06:28.810073  301866 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:06:28.810296  301866 kubeadm.go:319] OS: Linux
	I1216 03:06:28.810444  301866 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:06:28.810557  301866 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:06:28.810631  301866 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:06:28.810698  301866 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:06:28.810798  301866 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:06:28.810906  301866 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:06:28.811006  301866 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:06:28.811082  301866 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:06:28.894637  301866 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:06:28.894752  301866 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:06:28.894914  301866 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:06:28.903719  301866 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:06:28.908115  301866 out.go:252]   - Generating certificates and keys ...
	I1216 03:06:28.908322  301866 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:06:28.908433  301866 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:06:29.178936  301866 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:06:29.427885  301866 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:06:29.506466  301866 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:06:29.667271  301866 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:06:30.003721  301866 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:06:30.004014  301866 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-742794 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	W1216 03:06:28.031563  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:30.526077  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:27.401768  305678 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.937980633s)
	I1216 03:06:27.401801  305678 kic.go:203] duration metric: took 3.938128444s to extract preloaded images to volume ...
	W1216 03:06:27.401981  305678 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:06:27.402026  305678 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:06:27.402073  305678 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:06:27.488021  305678 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-646016 --name auto-646016 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-646016 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-646016 --network auto-646016 --ip 192.168.94.2 --volume auto-646016:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:06:27.916470  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Running}}
	I1216 03:06:27.941981  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:27.965737  305678 cli_runner.go:164] Run: docker exec auto-646016 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:06:28.030556  305678 oci.go:144] the created container "auto-646016" has a running status.
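Editor's note: the docker run above publishes the guest's 22, 2376, 5000, 8443 and 32443 ports on ephemeral host ports bound to 127.0.0.1; the container inspects that follow (and the later SSH dial to 127.0.0.1:33103) resolve those mappings. They can also be listed directly, for illustration:

  $ docker port auto-646016
  22/tcp -> 127.0.0.1:33103
  (the remaining mappings are likewise ephemeral loopback ports)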
	I1216 03:06:28.030604  305678 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa...
	I1216 03:06:28.250169  305678 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:06:28.290375  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:28.330295  305678 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:06:28.330378  305678 kic_runner.go:114] Args: [docker exec --privileged auto-646016 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:06:28.544452  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:28.571342  305678 machine.go:94] provisionDockerMachine start ...
	I1216 03:06:28.571447  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:28.598121  305678 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:28.598529  305678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 03:06:28.598549  305678 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:06:28.765433  305678 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-646016
	
	I1216 03:06:28.765466  305678 ubuntu.go:182] provisioning hostname "auto-646016"
	I1216 03:06:28.765536  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:28.791412  305678 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:28.791759  305678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 03:06:28.791778  305678 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-646016 && echo "auto-646016" | sudo tee /etc/hostname
	I1216 03:06:28.955843  305678 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-646016
	
	I1216 03:06:28.955928  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:28.976908  305678 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:28.977240  305678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 03:06:28.977267  305678 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-646016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-646016/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-646016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:06:29.118602  305678 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:06:29.118645  305678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:06:29.118679  305678 ubuntu.go:190] setting up certificates
	I1216 03:06:29.118690  305678 provision.go:84] configureAuth start
	I1216 03:06:29.118774  305678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-646016
	I1216 03:06:29.140474  305678 provision.go:143] copyHostCerts
	I1216 03:06:29.140537  305678 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:06:29.140547  305678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:06:29.140616  305678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:06:29.140936  305678 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:06:29.140979  305678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:06:29.141031  305678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:06:29.141136  305678 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:06:29.141157  305678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:06:29.141196  305678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:06:29.141292  305678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.auto-646016 san=[127.0.0.1 192.168.94.2 auto-646016 localhost minikube]
	I1216 03:06:29.379555  305678 provision.go:177] copyRemoteCerts
	I1216 03:06:29.379675  305678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:06:29.379734  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:29.400227  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:29.505409  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:06:29.526596  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:06:29.545035  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 03:06:29.562187  305678 provision.go:87] duration metric: took 443.479302ms to configureAuth
	I1216 03:06:29.562211  305678 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:06:29.562378  305678 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:29.562486  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:29.580950  305678 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:29.581162  305678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 03:06:29.581177  305678 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:06:29.871324  305678 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:06:29.871351  305678 machine.go:97] duration metric: took 1.29998343s to provisionDockerMachine
	I1216 03:06:29.871363  305678 client.go:176] duration metric: took 7.33491459s to LocalClient.Create
	I1216 03:06:29.871386  305678 start.go:167] duration metric: took 7.33499095s to libmachine.API.Create "auto-646016"
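Editor's note: the SSH command a few lines above wrote a CRIO_MINIKUBE_OPTIONS line (marking 10.96.0.0/12 as an insecure registry range) into /etc/sysconfig/crio.minikube and restarted CRI-O so the option takes effect. An illustrative way to confirm the file landed, using docker exec instead of SSH:

  $ docker exec auto-646016 cat /etc/sysconfig/crio.minikube
  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '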
	I1216 03:06:29.871398  305678 start.go:293] postStartSetup for "auto-646016" (driver="docker")
	I1216 03:06:29.871411  305678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:06:29.871479  305678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:06:29.871525  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:29.891802  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:29.993811  305678 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:06:29.997876  305678 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:06:29.997910  305678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:06:29.997922  305678 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:06:29.997981  305678 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:06:29.998082  305678 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:06:29.998248  305678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:06:30.007518  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:30.030928  305678 start.go:296] duration metric: took 159.515274ms for postStartSetup
	I1216 03:06:30.031346  305678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-646016
	I1216 03:06:30.050753  305678 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/config.json ...
	I1216 03:06:30.051020  305678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:06:30.051059  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:30.072495  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:30.174470  305678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:06:30.179459  305678 start.go:128] duration metric: took 7.64624628s to createHost
	I1216 03:06:30.179488  305678 start.go:83] releasing machines lock for "auto-646016", held for 7.646413559s
	I1216 03:06:30.179559  305678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-646016
	I1216 03:06:30.211045  305678 ssh_runner.go:195] Run: cat /version.json
	I1216 03:06:30.211102  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:30.213976  305678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:06:30.214043  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:30.241548  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:30.241843  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:30.401542  305678 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:30.409927  305678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:06:30.448868  305678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:06:30.454373  305678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:06:30.454434  305678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:06:30.481707  305678 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:06:30.481734  305678 start.go:496] detecting cgroup driver to use...
	I1216 03:06:30.481767  305678 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:06:30.481813  305678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:06:30.500491  305678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:06:30.514918  305678 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:06:30.514976  305678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:06:30.532894  305678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:06:30.552748  305678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:06:30.658021  305678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:06:30.747763  305678 docker.go:234] disabling docker service ...
	I1216 03:06:30.747857  305678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:06:30.766747  305678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:06:30.781048  305678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:06:30.882756  305678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:06:30.975290  305678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:06:30.988920  305678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:06:31.004840  305678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:06:31.004926  305678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:31.016492  305678 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:06:31.016560  305678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:31.026978  305678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:31.038130  305678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:31.048847  305678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:06:31.057355  305678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:31.066393  305678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:31.081841  305678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:31.090647  305678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:06:31.099128  305678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:06:31.106828  305678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:31.193613  305678 ssh_runner.go:195] Run: sudo systemctl restart crio
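Editor's note: the run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to systemd with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A rough, illustrative view of the resulting drop-in (the exact layout and section headers are not captured in the log):

  $ docker exec auto-646016 grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged' /etc/crio/crio.conf.d/02-crio.conf
  pause_image = "registry.k8s.io/pause:3.10.1"
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"
    "net.ipv4.ip_unprivileged_port_start=0",

The last line is the entry the final sed appends inside the default_sysctls = [ ... ] list.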
	I1216 03:06:31.343984  305678 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:06:31.344051  305678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:06:31.348419  305678 start.go:564] Will wait 60s for crictl version
	I1216 03:06:31.348486  305678 ssh_runner.go:195] Run: which crictl
	I1216 03:06:31.353197  305678 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:06:31.383536  305678 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:06:31.383625  305678 ssh_runner.go:195] Run: crio --version
	I1216 03:06:31.414731  305678 ssh_runner.go:195] Run: crio --version
	I1216 03:06:31.447913  305678 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 03:06:31.449589  305678 cli_runner.go:164] Run: docker network inspect auto-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:31.469378  305678 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 03:06:31.474211  305678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
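Note: that bash one-liner strips any stale host.minikube.internal entry from /etc/hosts and re-adds it pointing at the docker network gateway; after it runs the file should contain the line below (reconstructed from the command itself; the same pattern is reused further down for control-plane.minikube.internal):
	$ grep minikube.internal /etc/hosts
	192.168.94.1	host.minikube.internal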
	I1216 03:06:31.485687  305678 kubeadm.go:884] updating cluster {Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:31.485854  305678 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:31.485909  305678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:31.522079  305678 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:31.522108  305678 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:31.522175  305678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:31.559142  305678 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:31.559171  305678 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:31.559181  305678 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1216 03:06:31.559289  305678 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-646016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
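Note: the [Unit]/[Service] fragment printed above is the content written to the kubelet drop-in a few lines below (the 361-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). If the profile is still running, a hedged way to confirm what the node actually loaded is:
	$ out/minikube-linux-amd64 -p auto-646016 ssh -- sudo systemctl cat kubelet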
	I1216 03:06:31.559400  305678 ssh_runner.go:195] Run: crio config
	I1216 03:06:31.613218  305678 cni.go:84] Creating CNI manager for ""
	I1216 03:06:31.613260  305678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:31.613284  305678 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:06:31.613315  305678 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-646016 NodeName:auto-646016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:31.613482  305678 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-646016"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:31.613576  305678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:06:31.622349  305678 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:31.622412  305678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:31.631742  305678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1216 03:06:31.646270  305678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:06:31.666202  305678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
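Note: the 2207-byte file copied above is the generated kubeadm config printed a few lines earlier. Assuming the v1.34.2 kubeadm staged under /var/lib/minikube/binaries supports the `config validate` subcommand (recent releases do), the file can be sanity-checked on the node independently of minikube:
	$ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new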
	I1216 03:06:31.680404  305678 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:31.684478  305678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:31.695980  305678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:31.779129  305678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:31.803476  305678 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016 for IP: 192.168.94.2
	I1216 03:06:31.803495  305678 certs.go:195] generating shared ca certs ...
	I1216 03:06:31.803514  305678 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:31.803696  305678 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:31.803748  305678 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:31.803763  305678 certs.go:257] generating profile certs ...
	I1216 03:06:31.803848  305678 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/client.key
	I1216 03:06:31.803868  305678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/client.crt with IP's: []
	I1216 03:06:31.909063  305678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/client.crt ...
	I1216 03:06:31.909090  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/client.crt: {Name:mk55dda372fd93dc13aa10b779584f45b9e4364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:31.909281  305678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/client.key ...
	I1216 03:06:31.909295  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/client.key: {Name:mkd4791b351170240836e828b902b6440274e680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:31.909395  305678 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.key.9f6da26e
	I1216 03:06:31.909416  305678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.crt.9f6da26e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1216 03:06:31.961811  305678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.crt.9f6da26e ...
	I1216 03:06:31.961852  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.crt.9f6da26e: {Name:mk2f53ff6e2444b10102753cbdcfb80aba8894ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:31.962027  305678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.key.9f6da26e ...
	I1216 03:06:31.962043  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.key.9f6da26e: {Name:mkb1412e2c0c0f5871cf751e273d187b087671eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:31.962127  305678 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.crt.9f6da26e -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.crt
	I1216 03:06:31.962240  305678 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.key.9f6da26e -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.key
	I1216 03:06:31.962317  305678 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.key
	I1216 03:06:31.962336  305678 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.crt with IP's: []
	I1216 03:06:32.047748  305678 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.crt ...
	I1216 03:06:32.047775  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.crt: {Name:mk7bcaec7360d266ffc7e53ecbb0a2dc72289095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:32.047964  305678 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.key ...
	I1216 03:06:32.047979  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.key: {Name:mkc57888df04c118f7a3f31be3f1ed551f897a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
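Note: the apiserver serving certificate generated above is signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2. If a SAN mismatch were ever suspected, the cert written to the profile directory could be inspected directly with standard openssl (not part of the captured run):
	$ openssl x509 -noout -text -in /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.crt | grep -A1 'Subject Alternative Name'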
	I1216 03:06:32.048194  305678 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:32.048243  305678 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:32.048257  305678 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:32.048296  305678 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:32.048328  305678 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:32.048360  305678 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:32.048416  305678 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:32.049265  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:32.068758  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:32.087611  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:32.108580  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:32.130159  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1216 03:06:32.152225  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:32.172487  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:32.191382  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/auto-646016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:06:32.212810  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:32.233634  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:32.251992  305678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:32.268940  305678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:32.282696  305678 ssh_runner.go:195] Run: openssl version
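Note: the log for this profile cuts off right after the CA and profile certificates are copied onto the node and `openssl version` is probed. Verifying that the shared CA arrived intact would look something like this (hypothetical follow-up, not captured here):
	$ out/minikube-linux-amd64 -p auto-646016 ssh -- openssl x509 -noout -subject -dates -in /usr/share/ca-certificates/minikubeCA.pem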
	
	
	==> CRI-O <==
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.544801478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.552679214Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dbaef898-6e40-49a4-bd2c-b6f6faf31ce1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.553165594Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5a90c108-90dd-4846-a04b-020611a634bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.557244308Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.55856057Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.56152136Z" level=info msg="Ran pod sandbox 5b5a814b936de6ec27efefe2519d62550fcc818145772b76e94ec8f6834bb770 with infra container: kube-system/kube-proxy-k55dg/POD" id=dbaef898-6e40-49a4-bd2c-b6f6faf31ce1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.562590403Z" level=info msg="Ran pod sandbox f1834151f52bc09f60935191c9f8eed65bba13df68b2e2d6a4a5d3511634ab10 with infra container: kube-system/kindnet-7vnx2/POD" id=5a90c108-90dd-4846-a04b-020611a634bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.565977669Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c7ce3edd-d0e9-4ba5-bb29-946bf80c7158 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.567104215Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=565c2a85-8c3d-4362-8ce8-f9085a969c4d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.567339398Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2938b03d-a5cf-488b-ad74-870bb1005dab name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.569108646Z" level=info msg="Creating container: kube-system/kindnet-7vnx2/kindnet-cni" id=58844da2-024c-4898-a455-d8fb5d91d5bb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.569216924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.574481203Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ab53670d-5916-4cfe-ab5c-fdea17f1748e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.586480621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.58725182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.587432478Z" level=info msg="Creating container: kube-system/kube-proxy-k55dg/kube-proxy" id=f26e7032-d020-4e80-b914-7a5e45e9d182 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.58760844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.620673626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.62138726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.659585409Z" level=info msg="Created container a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303: kube-system/kube-proxy-k55dg/kube-proxy" id=f26e7032-d020-4e80-b914-7a5e45e9d182 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.660020081Z" level=info msg="Created container 1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c: kube-system/kindnet-7vnx2/kindnet-cni" id=58844da2-024c-4898-a455-d8fb5d91d5bb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.662583391Z" level=info msg="Starting container: 1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c" id=07971559-effe-4648-b8d0-3abc529d7cd3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.664104111Z" level=info msg="Starting container: a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303" id=09ff8509-d076-4d92-a06d-22bc4daac9f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.670115566Z" level=info msg="Started container" PID=1042 containerID=a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303 description=kube-system/kube-proxy-k55dg/kube-proxy id=09ff8509-d076-4d92-a06d-22bc4daac9f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b5a814b936de6ec27efefe2519d62550fcc818145772b76e94ec8f6834bb770
	Dec 16 03:06:26 newest-cni-991316 crio[519]: time="2025-12-16T03:06:26.671057493Z" level=info msg="Started container" PID=1037 containerID=1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c description=kube-system/kindnet-7vnx2/kindnet-cni id=07971559-effe-4648-b8d0-3abc529d7cd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1834151f52bc09f60935191c9f8eed65bba13df68b2e2d6a4a5d3511634ab10
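Note: this CRI-O excerpt and the sections that follow (container status, describe nodes, dmesg, per-component logs) appear to be the usual `minikube logs` bundle gathered by the test helpers for the failing newest-cni-991316 profile. The same journal can be tailed directly with something like:
	$ out/minikube-linux-amd64 -p newest-cni-991316 ssh -- sudo journalctl -u crio --no-pager -n 25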
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a30488f17adf4       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   5b5a814b936de       kube-proxy-k55dg                            kube-system
	1627d3a312598       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   f1834151f52bc       kindnet-7vnx2                               kube-system
	9e8bbaa71c603       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   9 seconds ago       Running             kube-scheduler            1                   3156b49ba7aec       kube-scheduler-newest-cni-991316            kube-system
	0c28c0cfc004d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   9 seconds ago       Running             kube-controller-manager   1                   94e6271e4fb63       kube-controller-manager-newest-cni-991316   kube-system
	8b18d0a9af326       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   9 seconds ago       Running             etcd                      1                   c53478c467e71       etcd-newest-cni-991316                      kube-system
	d5dff5ae5810d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   9 seconds ago       Running             kube-apiserver            1                   f04b9706e49e0       kube-apiserver-newest-cni-991316            kube-system
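Note: the table above is the crictl view of the runtime; every control-plane container is on attempt 1 and only seconds old, consistent with a very recent restart of the kubelet and control plane. Reproducing it directly (assuming the profile is still up):
	$ out/minikube-linux-amd64 -p newest-cni-991316 ssh -- sudo crictl ps -a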
	
	
	==> describe nodes <==
	Name:               newest-cni-991316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-991316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=newest-cni-991316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_05_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:05:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-991316
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:06:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 16 Dec 2025 03:06:25 +0000   Tue, 16 Dec 2025 03:05:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-991316
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                58335f55-1f55-4122-b10c-c1f511a1797b
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-991316                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-7vnx2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-991316             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-991316    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-k55dg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-991316             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  31s   node-controller  Node newest-cni-991316 event: Registered Node newest-cni-991316 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-991316 event: Registered Node newest-cni-991316 in Controller
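Note: the Ready=False condition above is driven by the missing CNI config ("no CNI configuration file in /etc/cni/net.d/"). That conflist is normally written by the kindnet pod, and the container-status table shows kindnet-cni only 6 seconds old, so it may simply not have re-created the file yet when this snapshot was taken. A hedged check from the node:
	$ out/minikube-linux-amd64 -p newest-cni-991316 ssh -- ls -l /etc/cni/net.d/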
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [8b18d0a9af326b9eb1103dc3d046d1ec2ec745aaf68662fc1898a5226313f65f] <==
	{"level":"warn","ts":"2025-12-16T03:06:26.292945Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:25.974869Z","time spent":"318.066833ms","remote":"127.0.0.1:33696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":5080,"request content":"key:\"/registry/pods/kube-system/kube-proxy-k55dg\" limit:1 "}
	{"level":"info","ts":"2025-12-16T03:06:26.292964Z","caller":"traceutil/trace.go:172","msg":"trace[1599206036] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-7vnx2; range_end:; response_count:1; response_revision:436; }","duration":"158.358742ms","start":"2025-12-16T03:06:26.134597Z","end":"2025-12-16T03:06:26.292955Z","steps":["trace[1599206036] 'agreement among raft nodes before linearized reading'  (duration: 158.240227ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.292967Z","caller":"traceutil/trace.go:172","msg":"trace[1064310780] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"316.465699ms","start":"2025-12-16T03:06:25.976484Z","end":"2025-12-16T03:06:26.292950Z","steps":["trace[1064310780] 'process raft request'  (duration: 259.095015ms)","trace[1064310780] 'compare'  (duration: 57.126906ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.293392Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:25.976466Z","time spent":"316.885056ms","remote":"127.0.0.1:33530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":686,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/newest-cni-991316.1881933483dc9764\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/newest-cni-991316.1881933483dc9764\" value_size:609 lease:6414985302981273270 >> failure:<>"}
	{"level":"warn","ts":"2025-12-16T03:06:26.293119Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.966978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:5009"}
	{"level":"info","ts":"2025-12-16T03:06:26.293544Z","caller":"traceutil/trace.go:172","msg":"trace[1405788312] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-991316; range_end:; response_count:1; response_revision:436; }","duration":"153.390134ms","start":"2025-12-16T03:06:26.140143Z","end":"2025-12-16T03:06:26.293533Z","steps":["trace[1405788312] 'agreement among raft nodes before linearized reading'  (duration: 152.901546ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.293123Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.669922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1137"}
	{"level":"info","ts":"2025-12-16T03:06:26.293680Z","caller":"traceutil/trace.go:172","msg":"trace[722680565] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:436; }","duration":"141.231239ms","start":"2025-12-16T03:06:26.152438Z","end":"2025-12-16T03:06:26.293669Z","steps":["trace[722680565] 'agreement among raft nodes before linearized reading'  (duration: 140.440135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.293200Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.276646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:7914"}
	{"level":"info","ts":"2025-12-16T03:06:26.294871Z","caller":"traceutil/trace.go:172","msg":"trace[428045109] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-991316; range_end:; response_count:1; response_revision:436; }","duration":"158.937235ms","start":"2025-12-16T03:06:26.135917Z","end":"2025-12-16T03:06:26.294854Z","steps":["trace[428045109] 'agreement among raft nodes before linearized reading'  (duration: 157.230598ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.431862Z","caller":"traceutil/trace.go:172","msg":"trace[490841297] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"132.960907ms","start":"2025-12-16T03:06:26.298877Z","end":"2025-12-16T03:06:26.431838Z","steps":["trace[490841297] 'process raft request'  (duration: 124.341666ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.432157Z","caller":"traceutil/trace.go:172","msg":"trace[1288058902] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"127.103244ms","start":"2025-12-16T03:06:26.305036Z","end":"2025-12-16T03:06:26.432139Z","steps":["trace[1288058902] 'process raft request'  (duration: 126.754351ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.432125Z","caller":"traceutil/trace.go:172","msg":"trace[1721760253] transaction","detail":"{read_only:false; number_of_response:0; response_revision:438; }","duration":"130.206464ms","start":"2025-12-16T03:06:26.301904Z","end":"2025-12-16T03:06:26.432110Z","steps":["trace[1721760253] 'process raft request'  (duration: 129.845531ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.543369Z","caller":"traceutil/trace.go:172","msg":"trace[1952504786] linearizableReadLoop","detail":"{readStateIndex:462; appliedIndex:462; }","duration":"104.25026ms","start":"2025-12-16T03:06:26.439096Z","end":"2025-12-16T03:06:26.543346Z","steps":["trace[1952504786] 'read index received'  (duration: 104.241917ms)","trace[1952504786] 'applied index is now lower than readState.Index'  (duration: 7.216µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.553670Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.555609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-991316.1881933483dc6375\" limit:1 ","response":"range_response_count:1 size:705"}
	{"level":"info","ts":"2025-12-16T03:06:26.554766Z","caller":"traceutil/trace.go:172","msg":"trace[1002969381] range","detail":"{range_begin:/registry/events/default/newest-cni-991316.1881933483dc6375; range_end:; response_count:1; response_revision:439; }","duration":"115.661248ms","start":"2025-12-16T03:06:26.439091Z","end":"2025-12-16T03:06:26.554752Z","steps":["trace[1002969381] 'agreement among raft nodes before linearized reading'  (duration: 104.360259ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.554025Z","caller":"traceutil/trace.go:172","msg":"trace[1102162955] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"115.799113ms","start":"2025-12-16T03:06:26.438210Z","end":"2025-12-16T03:06:26.554009Z","steps":["trace[1102162955] 'process raft request'  (duration: 105.281519ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.554331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.669723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-12-16T03:06:26.555100Z","caller":"traceutil/trace.go:172","msg":"trace[1119378501] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:440; }","duration":"105.438917ms","start":"2025-12-16T03:06:26.449646Z","end":"2025-12-16T03:06:26.555085Z","steps":["trace[1119378501] 'agreement among raft nodes before linearized reading'  (duration: 104.622634ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.554366Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.804372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-16T03:06:26.554419Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.412099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:7448"}
	{"level":"warn","ts":"2025-12-16T03:06:26.554472Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.892234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-newest-cni-991316\" limit:1 ","response":"range_response_count:1 size:5976"}
	{"level":"info","ts":"2025-12-16T03:06:26.556108Z","caller":"traceutil/trace.go:172","msg":"trace[451007478] range","detail":"{range_begin:/registry/pods/kube-system/etcd-newest-cni-991316; range_end:; response_count:1; response_revision:440; }","duration":"116.519553ms","start":"2025-12-16T03:06:26.439576Z","end":"2025-12-16T03:06:26.556095Z","steps":["trace[451007478] 'agreement among raft nodes before linearized reading'  (duration: 114.85819ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.556440Z","caller":"traceutil/trace.go:172","msg":"trace[1720046198] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:440; }","duration":"106.872228ms","start":"2025-12-16T03:06:26.449558Z","end":"2025-12-16T03:06:26.556430Z","steps":["trace[1720046198] 'agreement among raft nodes before linearized reading'  (duration: 104.79226ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.557748Z","caller":"traceutil/trace.go:172","msg":"trace[1772874145] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-newest-cni-991316; range_end:; response_count:1; response_revision:440; }","duration":"117.730576ms","start":"2025-12-16T03:06:26.440002Z","end":"2025-12-16T03:06:26.557732Z","steps":["trace[1772874145] 'agreement among raft nodes before linearized reading'  (duration: 114.373729ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:06:33 up 49 min,  0 user,  load average: 5.28, 3.35, 2.10
	Linux newest-cni-991316 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1627d3a312598fcbc1f789b9122aa9f322fff636831b56c96e8e80fb26bf2f8c] <==
	I1216 03:06:26.892131       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:06:26.892636       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 03:06:26.892783       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:06:26.892801       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:06:26.892843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:06:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:06:27.189149       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:06:27.189200       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:06:27.189213       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:06:27.189372       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:06:27.589351       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:06:27.590083       1 metrics.go:72] Registering metrics
	I1216 03:06:27.590187       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [d5dff5ae5810d412e8907ca08c813052e5139b282f71ec0fa1e0c388545594ef] <==
	I1216 03:06:25.549016       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:06:25.549047       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:06:25.549066       1 aggregator.go:187] initial CRD sync complete...
	I1216 03:06:25.549080       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:06:25.549079       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:06:25.549090       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:25.549086       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:06:25.549142       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:06:25.549300       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:06:25.559327       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:06:25.561665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:06:25.574051       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:25.574080       1 policy_source.go:248] refreshing policies
	I1216 03:06:25.581253       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:06:25.974454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:06:26.297792       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:06:26.622837       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 03:06:26.695728       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:06:26.739581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:06:26.755975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:06:26.843867       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.77.8"}
	I1216 03:06:26.860013       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.86.21"}
	I1216 03:06:29.104688       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:06:29.154335       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:06:29.203860       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0c28c0cfc004d90699e4e87cdd35e0b26b6c417656ded6b7c595335d959d33dc] <==
	I1216 03:06:28.717023       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.716883       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.716899       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.717032       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.714668       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.717056       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.717040       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.716972       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.719443       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1216 03:06:28.720238       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-991316"
	I1216 03:06:28.720348       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1216 03:06:28.720274       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.721965       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.722014       1 range_allocator.go:177] "Sending events to api server"
	I1216 03:06:28.722049       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1216 03:06:28.722062       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:28.722068       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.722504       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.725363       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.725883       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.734783       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:28.811037       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:28.811061       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 03:06:28.811070       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 03:06:28.835927       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [a30488f17adf4be9a55eb4eef209c6a9fb81fd027dcefa189f2beca3f20e1303] <==
	I1216 03:06:26.748843       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:06:26.827248       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:26.927665       1 shared_informer.go:377] "Caches are synced"
	I1216 03:06:26.927732       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 03:06:26.927896       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:06:26.964561       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:06:26.964676       1 server_linux.go:136] "Using iptables Proxier"
	I1216 03:06:26.977772       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:06:26.978159       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 03:06:26.978403       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:26.980658       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:06:26.980677       1 config.go:200] "Starting service config controller"
	I1216 03:06:26.980691       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:06:26.980704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:06:26.981244       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:06:26.982151       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:06:26.981390       1 config.go:309] "Starting node config controller"
	I1216 03:06:26.982256       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:06:26.982288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:06:27.081616       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:06:27.081628       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:06:27.082965       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9e8bbaa71c603609c449dee8ce46d5c12489f28238ea9376f424476c5cbd1af3] <==
	I1216 03:06:23.484928       1 serving.go:386] Generated self-signed cert in-memory
	W1216 03:06:25.488444       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:06:25.488579       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1216 03:06:25.488595       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:06:25.488604       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:06:25.507170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1216 03:06:25.507201       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:25.510032       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:06:25.510066       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 03:06:25.510195       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:06:25.510263       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:06:25.610641       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.785910     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-991316" containerName="kube-apiserver"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.786935     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-991316" containerName="kube-scheduler"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.787050     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-991316" containerName="kube-controller-manager"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.787208     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-991316" containerName="etcd"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.812623     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-991316\" already exists" pod="kube-system/kube-scheduler-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.812705     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.827843     663 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.827941     663 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.827984     663 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: E1216 03:06:25.831587     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-991316\" already exists" pod="kube-system/etcd-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.831765     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-991316"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.833048     663 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.839739     663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868246     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-lib-modules\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868296     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-lib-modules\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868339     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-cni-cfg\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868369     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/693caa56-221c-4967-b459-24c95a6f228b-xtables-lock\") pod \"kindnet-7vnx2\" (UID: \"693caa56-221c-4967-b459-24c95a6f228b\") " pod="kube-system/kindnet-7vnx2"
	Dec 16 03:06:25 newest-cni-991316 kubelet[663]: I1216 03:06:25.868404     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dcf431e-16a0-4327-b437-ad2b0b7cbea0-xtables-lock\") pod \"kube-proxy-k55dg\" (UID: \"3dcf431e-16a0-4327-b437-ad2b0b7cbea0\") " pod="kube-system/kube-proxy-k55dg"
	Dec 16 03:06:26 newest-cni-991316 kubelet[663]: E1216 03:06:26.298455     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-991316\" already exists" pod="kube-system/kube-apiserver-newest-cni-991316"
	Dec 16 03:06:26 newest-cni-991316 kubelet[663]: I1216 03:06:26.298510     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-991316"
	Dec 16 03:06:26 newest-cni-991316 kubelet[663]: E1216 03:06:26.566449     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-991316\" already exists" pod="kube-system/kube-controller-manager-newest-cni-991316"
	Dec 16 03:06:28 newest-cni-991316 kubelet[663]: I1216 03:06:28.827253     663 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 03:06:28 newest-cni-991316 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:06:28 newest-cni-991316 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:06:28 newest-cni-991316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-991316 -n newest-cni-991316
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-991316 -n newest-cni-991316: exit status 2 (357.441037ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-991316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx: exit status 1 (73.508788ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-86ggg" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-r7zbq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kdkrx" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-991316 describe pod coredns-7d764666f9-86ggg storage-provisioner dashboard-metrics-scraper-867fb5f87b-r7zbq kubernetes-dashboard-b84665fb8-kdkrx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.81s)
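Note: the post-mortem above first enumerates pods that are not in the Running phase (helpers_test.go:270) and only then describes each one. A minimal stand-alone sketch of that enumeration step, reusing the context name from this log; the jsonpath range expression is an illustrative variant, not the harness's own code:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// List every pod, across all namespaces, whose phase is not Running,
	// mirroring the field selector the post-mortem helper uses above.
	out, err := exec.Command("kubectl",
		"--context", "newest-cni-991316",
		"get", "po", "-A",
		"--field-selector", "status.phase!=Running",
		"-o", `jsonpath={range .items[*]}{.metadata.namespace}/{.metadata.name} {.status.phase}{"\n"}{end}`,
	).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl: %v\n%s", err, out)
	}
	fmt.Print(string(out)) // one "<namespace>/<name> <phase>" line per non-running pod
}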

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-742794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-742794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.978872ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:07:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
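Note: the MK_ADDON_ENABLE_PAUSED failure above comes from a pre-flight check that asks whether the cluster is paused, and the error text suggests that check boils down to running `sudo runc list -f json` on the node. A minimal sketch of such a probe under that assumption (the "id"/"status" keys follow runc's JSON output; this is illustrative only, not minikube's actual implementation):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// runcContainer holds the two fields of `runc list -f json` output that a
// paused-check needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this CRI-O node the call fails before producing any JSON:
		// "open /run/runc: no such file or directory", as in the log above.
		// In the failing run, that error is what gets wrapped into the
		// exit-11 MK_ADDON_ENABLE_PAUSED message.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	body := strings.TrimSpace(string(out))
	if body == "" || body == "null" {
		return nil, nil // runc knows about no containers at all
	}
	var cs []runcContainer
	if err := json.Unmarshal([]byte(body), &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("paused containers:", ids)
}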
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-742794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-742794 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-742794 describe deploy/metrics-server -n kube-system: exit status 1 (60.648123ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-742794 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
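Note: the assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to reference the overridden registry, i.e. an image containing fake.domain/registry.k8s.io/echoserver:1.4. A hedged way to reproduce that check directly against the same context, using jsonpath instead of parsing `kubectl describe` output (illustrative only; in the failing run above it would stop at the NotFound branch because the deployment was never created):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask the cluster which image the metrics-server deployment actually runs.
	out, err := exec.Command("kubectl",
		"--context", "embed-certs-742794",
		"-n", "kube-system",
		"get", "deploy/metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}",
	).CombinedOutput()
	if err != nil {
		// The branch the failing run above would take: the addon never
		// applied, so the deployment is NotFound.
		log.Fatalf("metrics-server deployment missing: %v\n%s", err, out)
	}
	image := strings.TrimSpace(string(out))
	if !strings.Contains(image, "fake.domain/registry.k8s.io/echoserver:1.4") {
		log.Fatalf("registry override not applied, image is %q", image)
	}
	fmt.Println("override applied:", image)
}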
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-742794
helpers_test.go:244: (dbg) docker inspect embed-certs-742794:

-- stdout --
	[
	    {
	        "Id": "913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3",
	        "Created": "2025-12-16T03:06:20.456549573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304489,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:06:20.516435243Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/hosts",
	        "LogPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3-json.log",
	        "Name": "/embed-certs-742794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-742794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-742794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3",
	                "LowerDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-742794",
	                "Source": "/var/lib/docker/volumes/embed-certs-742794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-742794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-742794",
	                "name.minikube.sigs.k8s.io": "embed-certs-742794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2d3b37c6691ef7a8ab0ecbb291fa3eaf14612cf60f041df4f2c4519d6e9d2648",
	            "SandboxKey": "/var/run/docker/netns/2d3b37c6691e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-742794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "698574664c58f66fc30ac38bce099a4a38e50897a8947172848cad9a06889288",
	                    "EndpointID": "ac465ec688ba622e796205c8396d07d89f8c61fd6b8b76bcc028bc685e5c3e0a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "26:bb:01:27:5e:74",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-742794",
	                        "913c75f545a3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
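Note: the most useful part of the inspect dump above is the port map: the container's 8443/tcp endpoint (the API server, per APIServerPort:8443 in the cluster config) is published on 127.0.0.1:33101, which is what the status and kubectl calls that follow rely on. A small sketch that pulls just that mapping out with a standard `docker inspect --format` Go template (container name taken from the log; illustrative, not minikube code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Standard docker Go-template: first host binding for the container's
	// 8443/tcp port, i.e. the "33101" visible in the inspect JSON above.
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format, "embed-certs-742794").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}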
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-742794 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-742794 logs -n 25: (1.046471633s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ image   │ old-k8s-version-073001 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ image   │ no-preload-307185 image list --format=json                                                                                                                                                                                                           │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p no-preload-307185 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-991316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p disable-driver-mounts-899443                                                                                                                                                                                                                      │ disable-driver-mounts-899443 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p auto-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:07 UTC │
	│ image   │ newest-cni-991316 image list --format=json                                                                                                                                                                                                           │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p newest-cni-991316 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p newest-cni-991316                                                                                                                                                                                                                                 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p newest-cni-991316                                                                                                                                                                                                                                 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p kindnet-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-646016               │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ ssh     │ -p auto-646016 pgrep -a kubelet                                                                                                                                                                                                                      │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-742794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ image   │ default-k8s-diff-port-079165 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ pause   │ -p default-k8s-diff-port-079165 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:36.912506  311649 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:36.912641  311649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:36.912649  311649 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:36.912656  311649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:36.912959  311649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:36.914248  311649 out.go:368] Setting JSON to false
	I1216 03:06:36.915985  311649 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2949,"bootTime":1765851448,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:36.916062  311649 start.go:143] virtualization: kvm guest
	I1216 03:06:36.918316  311649 out.go:179] * [kindnet-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:36.921321  311649 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:36.921324  311649 notify.go:221] Checking for updates...
	I1216 03:06:36.926057  311649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:36.934596  311649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:36.937150  311649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:36.938685  311649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:36.940325  311649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:36.943016  311649 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943170  311649 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943308  311649 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943452  311649 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:36.974200  311649 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:36.974308  311649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:37.059528  311649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 03:06:37.045598159 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:37.059688  311649 docker.go:319] overlay module found
	I1216 03:06:37.062324  311649 out.go:179] * Using the docker driver based on user configuration
	I1216 03:06:37.064270  311649 start.go:309] selected driver: docker
	I1216 03:06:37.064290  311649 start.go:927] validating driver "docker" against <nil>
	I1216 03:06:37.064306  311649 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:37.065092  311649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:37.134587  311649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 03:06:37.120781191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:37.134868  311649 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:06:37.135202  311649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:37.139979  311649 out.go:179] * Using Docker driver with root privileges
	I1216 03:06:37.141298  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:06:37.141320  311649 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:06:37.141420  311649 start.go:353] cluster config:
	{Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:37.142847  311649 out.go:179] * Starting "kindnet-646016" primary control-plane node in "kindnet-646016" cluster
	I1216 03:06:37.144033  311649 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:37.145214  311649 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:37.146273  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:37.146323  311649 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:37.146332  311649 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:37.146381  311649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:37.146438  311649 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:37.146451  311649 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:37.146582  311649 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json ...
	I1216 03:06:37.146609  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json: {Name:mka01fc2d87dd258e9e4215769fc0defca835ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:37.173960  311649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:37.174000  311649 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:37.174018  311649 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:37.174056  311649 start.go:360] acquireMachinesLock for kindnet-646016: {Name:mk5e982439fb31b21f2bf0f14b638469610e2ecb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:37.174175  311649 start.go:364] duration metric: took 97.838µs to acquireMachinesLock for "kindnet-646016"
	I1216 03:06:37.174206  311649 start.go:93] Provisioning new machine with config: &{Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:37.174307  311649 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:06:32.289938  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.297659  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:32.306317  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.310169  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.310225  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.358310  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:32.366800  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:32.374925  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.382691  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:32.390401  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.394611  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.394661  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.433920  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:32.442904  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:32.452551  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.460567  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:32.468254  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.472142  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.472194  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.512960  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:32.521828  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:32.531306  305678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:32.535264  305678 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:32.535327  305678 kubeadm.go:401] StartCluster: {Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:32.535422  305678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:32.535487  305678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:32.570545  305678 cri.go:89] found id: ""
	I1216 03:06:32.570617  305678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:32.580361  305678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:32.590036  305678 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:32.590101  305678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:32.600310  305678 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:32.600328  305678 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:32.600380  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:32.611364  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:32.611434  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:32.621528  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:32.630592  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:32.630691  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:32.639135  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:32.647615  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:32.647672  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:32.655556  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:32.663704  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:32.663751  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:06:32.671103  305678 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:32.732749  305678 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:32.798205  305678 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:36.811045  301866 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.343509782s
	I1216 03:06:37.324341  301866 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.856445935s
	I1216 03:06:38.970006  301866 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502495893s
	I1216 03:06:38.987567  301866 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:06:38.999896  301866 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:06:39.008632  301866 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:06:39.008951  301866 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-742794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:06:39.018346  301866 kubeadm.go:319] [bootstrap-token] Using token: jt3t6c.ftosdk62dr4hq8nx
	I1216 03:06:39.020229  301866 out.go:252]   - Configuring RBAC rules ...
	I1216 03:06:39.020406  301866 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:06:39.023717  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:06:39.030138  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:06:39.032812  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:06:39.035589  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:06:39.040407  301866 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:06:39.376310  301866 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:06:39.798064  301866 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:06:40.387055  301866 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:06:40.388094  301866 kubeadm.go:319] 
	I1216 03:06:40.388196  301866 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:06:40.388227  301866 kubeadm.go:319] 
	I1216 03:06:40.388343  301866 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:06:40.388356  301866 kubeadm.go:319] 
	I1216 03:06:40.388385  301866 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:06:40.388525  301866 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:06:40.388619  301866 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:06:40.388630  301866 kubeadm.go:319] 
	I1216 03:06:40.388735  301866 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:06:40.388751  301866 kubeadm.go:319] 
	I1216 03:06:40.388846  301866 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:06:40.388859  301866 kubeadm.go:319] 
	I1216 03:06:40.388922  301866 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:06:40.388986  301866 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:06:40.389039  301866 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:06:40.389047  301866 kubeadm.go:319] 
	I1216 03:06:40.389159  301866 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:06:40.389224  301866 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:06:40.389230  301866 kubeadm.go:319] 
	I1216 03:06:40.389294  301866 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jt3t6c.ftosdk62dr4hq8nx \
	I1216 03:06:40.389377  301866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:06:40.389395  301866 kubeadm.go:319] 	--control-plane 
	I1216 03:06:40.389400  301866 kubeadm.go:319] 
	I1216 03:06:40.389478  301866 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:06:40.389487  301866 kubeadm.go:319] 
	I1216 03:06:40.389595  301866 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jt3t6c.ftosdk62dr4hq8nx \
	I1216 03:06:40.389778  301866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:06:40.392758  301866 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:40.392974  301866 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:40.393004  301866 cni.go:84] Creating CNI manager for ""
	I1216 03:06:40.393011  301866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:40.488426  301866 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1216 03:06:37.030102  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:39.526744  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:37.176299  311649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:06:37.176572  311649 start.go:159] libmachine.API.Create for "kindnet-646016" (driver="docker")
	I1216 03:06:37.176609  311649 client.go:173] LocalClient.Create starting
	I1216 03:06:37.176683  311649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:06:37.176734  311649 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:37.176758  311649 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:37.176868  311649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:06:37.176934  311649 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:37.176955  311649 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:37.177346  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:06:37.198035  311649 cli_runner.go:211] docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:06:37.198117  311649 network_create.go:284] running [docker network inspect kindnet-646016] to gather additional debugging logs...
	I1216 03:06:37.198140  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016
	W1216 03:06:37.217351  311649 cli_runner.go:211] docker network inspect kindnet-646016 returned with exit code 1
	I1216 03:06:37.217385  311649 network_create.go:287] error running [docker network inspect kindnet-646016]: docker network inspect kindnet-646016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-646016 not found
	I1216 03:06:37.217404  311649 network_create.go:289] output of [docker network inspect kindnet-646016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-646016 not found
	
	** /stderr **
	I1216 03:06:37.217553  311649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:37.239137  311649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:06:37.240088  311649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:06:37.241036  311649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:06:37.242047  311649 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7bbc0}
	I1216 03:06:37.242076  311649 network_create.go:124] attempt to create docker network kindnet-646016 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 03:06:37.242129  311649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-646016 kindnet-646016
	I1216 03:06:37.303813  311649 network_create.go:108] docker network kindnet-646016 192.168.76.0/24 created
	I1216 03:06:37.303878  311649 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-646016" container
	I1216 03:06:37.303960  311649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:06:37.326233  311649 cli_runner.go:164] Run: docker volume create kindnet-646016 --label name.minikube.sigs.k8s.io=kindnet-646016 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:06:37.345781  311649 oci.go:103] Successfully created a docker volume kindnet-646016
	I1216 03:06:37.345884  311649 cli_runner.go:164] Run: docker run --rm --name kindnet-646016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646016 --entrypoint /usr/bin/test -v kindnet-646016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:06:37.826587  311649 oci.go:107] Successfully prepared a docker volume kindnet-646016
	I1216 03:06:37.826662  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:37.826680  311649 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:06:37.826753  311649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 03:06:42.492370  305678 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:06:42.492457  305678 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:06:42.492585  305678 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:06:42.492655  305678 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:06:42.492702  305678 kubeadm.go:319] OS: Linux
	I1216 03:06:42.492792  305678 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:06:42.492885  305678 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:06:42.492953  305678 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:06:42.493065  305678 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:06:42.493139  305678 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:06:42.493206  305678 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:06:42.493274  305678 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:06:42.493336  305678 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:06:42.493440  305678 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:06:42.493521  305678 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:06:42.493648  305678 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:06:42.493769  305678 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:06:42.494971  305678 out.go:252]   - Generating certificates and keys ...
	I1216 03:06:42.495073  305678 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:06:42.495136  305678 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:06:42.495239  305678 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:06:42.495320  305678 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:06:42.495390  305678 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:06:42.495471  305678 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:06:42.495555  305678 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:06:42.495710  305678 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-646016 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:06:42.495789  305678 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:06:42.495956  305678 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-646016 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:06:42.496049  305678 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:06:42.496141  305678 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:06:42.496209  305678 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:06:42.496297  305678 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:06:42.496386  305678 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:06:42.496480  305678 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:06:42.496551  305678 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:06:42.496644  305678 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:06:42.496722  305678 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:06:42.496861  305678 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:06:42.496960  305678 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:06:42.498424  305678 out.go:252]   - Booting up control plane ...
	I1216 03:06:42.498537  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:06:42.498665  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:06:42.498728  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:06:42.498847  305678 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:06:42.498988  305678 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:06:42.499152  305678 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:06:42.499290  305678 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:06:42.499345  305678 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:06:42.499657  305678 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:06:42.499788  305678 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:06:42.499885  305678 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.112324ms
	I1216 03:06:42.500041  305678 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:06:42.500173  305678 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1216 03:06:42.500323  305678 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:06:42.500442  305678 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:06:42.500546  305678 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.175318386s
	I1216 03:06:42.500649  305678 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.4004376s
	I1216 03:06:42.500732  305678 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501249222s
	I1216 03:06:42.500884  305678 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:06:42.501003  305678 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:06:42.501081  305678 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:06:42.501327  305678 kubeadm.go:319] [mark-control-plane] Marking the node auto-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:06:42.501376  305678 kubeadm.go:319] [bootstrap-token] Using token: lvkpe0.dg8z2fbad7xa25ob
	I1216 03:06:42.502851  305678 out.go:252]   - Configuring RBAC rules ...
	I1216 03:06:42.502987  305678 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:06:42.503101  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:06:42.503288  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:06:42.503482  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:06:42.503640  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:06:42.503758  305678 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:06:42.503965  305678 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:06:42.504037  305678 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:06:42.504108  305678 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:06:42.504119  305678 kubeadm.go:319] 
	I1216 03:06:42.504203  305678 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:06:42.504215  305678 kubeadm.go:319] 
	I1216 03:06:42.504329  305678 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:06:42.504345  305678 kubeadm.go:319] 
	I1216 03:06:42.504395  305678 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:06:42.504479  305678 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:06:42.504568  305678 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:06:42.504579  305678 kubeadm.go:319] 
	I1216 03:06:42.504668  305678 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:06:42.504683  305678 kubeadm.go:319] 
	I1216 03:06:42.504765  305678 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:06:42.504775  305678 kubeadm.go:319] 
	I1216 03:06:42.504864  305678 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:06:42.504998  305678 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:06:42.505082  305678 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:06:42.505091  305678 kubeadm.go:319] 
	I1216 03:06:42.505215  305678 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:06:42.505315  305678 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:06:42.505323  305678 kubeadm.go:319] 
	I1216 03:06:42.505423  305678 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lvkpe0.dg8z2fbad7xa25ob \
	I1216 03:06:42.505558  305678 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:06:42.505584  305678 kubeadm.go:319] 	--control-plane 
	I1216 03:06:42.505592  305678 kubeadm.go:319] 
	I1216 03:06:42.505680  305678 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:06:42.505686  305678 kubeadm.go:319] 
	I1216 03:06:42.505749  305678 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lvkpe0.dg8z2fbad7xa25ob \
	I1216 03:06:42.505864  305678 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:06:42.505877  305678 cni.go:84] Creating CNI manager for ""
	I1216 03:06:42.505884  305678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:42.507282  305678 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 03:06:40.556500  301866 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:06:40.561584  301866 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:06:40.561613  301866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:06:40.577774  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:06:41.613918  301866 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.036089237s)
	I1216 03:06:41.613972  301866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:06:41.614150  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:41.614173  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-742794 minikube.k8s.io/updated_at=2025_12_16T03_06_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=embed-certs-742794 minikube.k8s.io/primary=true
	I1216 03:06:41.626342  301866 ops.go:34] apiserver oom_adj: -16
	I1216 03:06:41.845142  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.345943  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.845105  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.345902  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.845135  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.345102  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.846051  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.345989  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.416661  301866 kubeadm.go:1114] duration metric: took 3.802575761s to wait for elevateKubeSystemPrivileges
	I1216 03:06:45.416708  301866 kubeadm.go:403] duration metric: took 16.875245445s to StartCluster
	I1216 03:06:45.416731  301866 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:45.416953  301866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:45.418953  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:45.419173  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:06:45.419182  301866 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:45.419261  301866 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:45.419359  301866 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-742794"
	I1216 03:06:45.419381  301866 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-742794"
	I1216 03:06:45.419396  301866 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:45.419414  301866 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:06:45.419459  301866 addons.go:70] Setting default-storageclass=true in profile "embed-certs-742794"
	I1216 03:06:45.419480  301866 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-742794"
	I1216 03:06:45.419894  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.420161  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.424569  301866 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:45.425946  301866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:45.449105  301866 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1216 03:06:42.026493  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:44.525591  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:45.450234  301866 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:45.450254  301866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:45.450315  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:45.450918  301866 addons.go:239] Setting addon default-storageclass=true in "embed-certs-742794"
	I1216 03:06:45.451884  301866 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:06:45.452391  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.474794  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:45.477242  301866 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:45.477258  301866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:45.477348  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:45.507412  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:45.532004  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:06:45.601352  301866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:45.618429  301866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:45.642176  301866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:45.751205  301866 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1216 03:06:45.926484  301866 node_ready.go:35] waiting up to 6m0s for node "embed-certs-742794" to be "Ready" ...
	I1216 03:06:45.931875  301866 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:06:42.187278  311649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.360463421s)
	I1216 03:06:42.187316  311649 kic.go:203] duration metric: took 4.360631679s to extract preloaded images to volume ...
	W1216 03:06:42.187436  311649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:06:42.187482  311649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:06:42.187655  311649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:06:42.264475  311649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-646016 --name kindnet-646016 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646016 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-646016 --network kindnet-646016 --ip 192.168.76.2 --volume kindnet-646016:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:06:42.589318  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Running}}
	I1216 03:06:42.613344  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.636793  311649 cli_runner.go:164] Run: docker exec kindnet-646016 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:06:42.692951  311649 oci.go:144] the created container "kindnet-646016" has a running status.
	I1216 03:06:42.693027  311649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa...
	I1216 03:06:42.723209  311649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:06:42.759298  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.788064  311649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:06:42.788107  311649 kic_runner.go:114] Args: [docker exec --privileged kindnet-646016 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:06:42.841532  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.870136  311649 machine.go:94] provisionDockerMachine start ...
	I1216 03:06:42.870241  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:42.900132  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:42.900484  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:42.900507  311649 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:06:42.901354  311649 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55522->127.0.0.1:33109: read: connection reset by peer
	I1216 03:06:46.051362  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646016
	
	I1216 03:06:46.051391  311649 ubuntu.go:182] provisioning hostname "kindnet-646016"
	I1216 03:06:46.051471  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.071710  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.072035  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.072054  311649 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-646016 && echo "kindnet-646016" | sudo tee /etc/hostname
	I1216 03:06:46.229313  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646016
	
	I1216 03:06:46.229390  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.250802  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.251099  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.251120  311649 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-646016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-646016/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-646016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:06:46.394197  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:06:46.394227  311649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:06:46.394258  311649 ubuntu.go:190] setting up certificates
	I1216 03:06:46.394271  311649 provision.go:84] configureAuth start
	I1216 03:06:46.394331  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:46.416666  311649 provision.go:143] copyHostCerts
	I1216 03:06:46.416740  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:06:46.416755  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:06:46.416885  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:06:46.417042  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:06:46.417058  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:06:46.417120  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:06:46.417250  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:06:46.417265  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:06:46.417314  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:06:46.417441  311649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.kindnet-646016 san=[127.0.0.1 192.168.76.2 kindnet-646016 localhost minikube]
	I1216 03:06:46.669146  311649 provision.go:177] copyRemoteCerts
	I1216 03:06:46.669199  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:06:46.669229  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.689779  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:46.791881  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:06:46.813593  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:06:46.832367  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 03:06:46.850692  311649 provision.go:87] duration metric: took 456.406984ms to configureAuth
	I1216 03:06:46.850726  311649 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:06:46.850934  311649 config.go:182] Loaded profile config "kindnet-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:46.851035  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.871285  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.871493  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.871507  311649 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:06:42.508558  305678 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:06:42.513406  305678 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:06:42.513425  305678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:06:42.529253  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:06:42.791486  305678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:06:42.791569  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.791628  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-646016 minikube.k8s.io/updated_at=2025_12_16T03_06_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=auto-646016 minikube.k8s.io/primary=true
	I1216 03:06:42.804265  305678 ops.go:34] apiserver oom_adj: -16
	I1216 03:06:42.902143  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.402756  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.903006  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.402268  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.902852  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.403072  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.902749  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.403233  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.902362  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.975382  305678 kubeadm.go:1114] duration metric: took 4.183882801s to wait for elevateKubeSystemPrivileges
	I1216 03:06:46.975415  305678 kubeadm.go:403] duration metric: took 14.440090912s to StartCluster
	I1216 03:06:46.975437  305678 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:46.975508  305678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:46.977140  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:46.977403  305678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:06:46.977404  305678 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:46.977486  305678 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:46.977579  305678 addons.go:70] Setting storage-provisioner=true in profile "auto-646016"
	I1216 03:06:46.977599  305678 addons.go:70] Setting default-storageclass=true in profile "auto-646016"
	I1216 03:06:46.977606  305678 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:46.977650  305678 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-646016"
	I1216 03:06:46.977607  305678 addons.go:239] Setting addon storage-provisioner=true in "auto-646016"
	I1216 03:06:46.977743  305678 host.go:66] Checking if "auto-646016" exists ...
	I1216 03:06:46.978050  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:46.978306  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:46.982308  305678 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:46.983620  305678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:47.002437  305678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:06:47.002605  305678 addons.go:239] Setting addon default-storageclass=true in "auto-646016"
	I1216 03:06:47.002668  305678 host.go:66] Checking if "auto-646016" exists ...
	I1216 03:06:47.003259  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:47.003564  305678 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:47.003579  305678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:47.003634  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:47.035685  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:47.038358  305678 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:47.038384  305678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:47.038454  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:47.063766  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:47.081171  305678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:06:47.136199  305678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:47.154654  305678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:47.183681  305678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:47.284544  305678 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 03:06:47.286230  305678 node_ready.go:35] waiting up to 15m0s for node "auto-646016" to be "Ready" ...
	I1216 03:06:47.496268  305678 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:06:47.193617  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:06:47.194051  311649 machine.go:97] duration metric: took 4.323568124s to provisionDockerMachine
	I1216 03:06:47.194092  311649 client.go:176] duration metric: took 10.017462228s to LocalClient.Create
	I1216 03:06:47.194125  311649 start.go:167] duration metric: took 10.017552786s to libmachine.API.Create "kindnet-646016"
	I1216 03:06:47.194137  311649 start.go:293] postStartSetup for "kindnet-646016" (driver="docker")
	I1216 03:06:47.194157  311649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:06:47.194247  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:06:47.194306  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.220949  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.335239  311649 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:06:47.339735  311649 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:06:47.339764  311649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:06:47.339779  311649 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:06:47.339871  311649 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:06:47.339980  311649 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:06:47.340094  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:06:47.348131  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:47.372022  311649 start.go:296] duration metric: took 177.869291ms for postStartSetup
	I1216 03:06:47.372443  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:47.397221  311649 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json ...
	I1216 03:06:47.397550  311649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:06:47.397606  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.415859  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.518022  311649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:06:47.523427  311649 start.go:128] duration metric: took 10.349106383s to createHost
	I1216 03:06:47.523456  311649 start.go:83] releasing machines lock for "kindnet-646016", held for 10.349266687s
	I1216 03:06:47.523530  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:47.546521  311649 ssh_runner.go:195] Run: cat /version.json
	I1216 03:06:47.546578  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.546599  311649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:06:47.546669  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.570313  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.570302  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.721354  311649 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:47.728115  311649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:06:47.764096  311649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:06:47.769332  311649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:06:47.769416  311649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:06:47.800234  311649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:06:47.800264  311649 start.go:496] detecting cgroup driver to use...
	I1216 03:06:47.800299  311649 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:06:47.800346  311649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:06:47.816262  311649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:06:47.828857  311649 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:06:47.828917  311649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:06:47.846000  311649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:06:47.864948  311649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:06:47.954521  311649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:06:48.052042  311649 docker.go:234] disabling docker service ...
	I1216 03:06:48.052109  311649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:06:48.070097  311649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:06:48.084175  311649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:06:48.172571  311649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:06:48.260483  311649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:06:48.273064  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:06:48.287395  311649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:06:48.287445  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.299225  311649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:06:48.299303  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.308963  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.318151  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.326922  311649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:06:48.336676  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.346533  311649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.363190  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.372458  311649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:06:48.380763  311649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:06:48.388403  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:48.471564  311649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:06:48.611303  311649 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:06:48.611368  311649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:06:48.615409  311649 start.go:564] Will wait 60s for crictl version
	I1216 03:06:48.615453  311649 ssh_runner.go:195] Run: which crictl
	I1216 03:06:48.619372  311649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:06:48.644746  311649 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:06:48.644839  311649 ssh_runner.go:195] Run: crio --version
	I1216 03:06:48.673737  311649 ssh_runner.go:195] Run: crio --version
	I1216 03:06:48.702915  311649 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 03:06:45.932861  301866 addons.go:530] duration metric: took 513.595889ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:06:46.256598  301866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-742794" context rescaled to 1 replicas
	W1216 03:06:47.929443  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:49.930144  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:46.525728  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:48.526032  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:50.526076  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:48.704147  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:48.721392  311649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 03:06:48.725738  311649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:48.736033  311649 kubeadm.go:884] updating cluster {Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:48.736149  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:48.736193  311649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:48.766912  311649 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:48.766931  311649 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:48.766981  311649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:48.793469  311649 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:48.793488  311649 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:48.793496  311649 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 03:06:48.793584  311649 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-646016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 03:06:48.793668  311649 ssh_runner.go:195] Run: crio config
	I1216 03:06:48.842069  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:06:48.842093  311649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:06:48.842113  311649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-646016 NodeName:kindnet-646016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:48.842278  311649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-646016"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:48.842350  311649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:06:48.851041  311649 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:48.851093  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:48.859976  311649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1216 03:06:48.873334  311649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:06:48.888764  311649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1216 03:06:48.901633  311649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:48.905305  311649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:48.915330  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:48.995098  311649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:49.027736  311649 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016 for IP: 192.168.76.2
	I1216 03:06:49.027754  311649 certs.go:195] generating shared ca certs ...
	I1216 03:06:49.027769  311649 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.027940  311649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:49.027991  311649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:49.027999  311649 certs.go:257] generating profile certs ...
	I1216 03:06:49.028050  311649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key
	I1216 03:06:49.028069  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt with IP's: []
	I1216 03:06:49.358443  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt ...
	I1216 03:06:49.358470  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt: {Name:mkd8b5e5f321efa7e9844310e79db14d2c69cdf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.358640  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key ...
	I1216 03:06:49.358651  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key: {Name:mk0a2ea2343a207eb4a3896019c7d6511f76de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.358724  311649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97
	I1216 03:06:49.358739  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 03:06:49.547719  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 ...
	I1216 03:06:49.547746  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97: {Name:mk0ea02365886ae096b9e5de77c47711b9643fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.547929  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97 ...
	I1216 03:06:49.547944  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97: {Name:mke361247d57cd7cd2fc7dc06040d57afdcb0c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.548042  311649 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt
	I1216 03:06:49.548133  311649 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key
	I1216 03:06:49.548195  311649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key
	I1216 03:06:49.548210  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt with IP's: []
	I1216 03:06:49.631433  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt ...
	I1216 03:06:49.631466  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt: {Name:mkad790b016b1279eb196a1c4cb8b1281ceb030b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.631654  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key ...
	I1216 03:06:49.631672  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key: {Name:mk3cdaee6c7ccfd128b07eb42506350a5c451ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.631986  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:49.632029  311649 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:49.632038  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:49.632063  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:49.632086  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:49.632113  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:49.632153  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:49.632685  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:49.654165  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:49.673281  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:49.691565  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:49.710072  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 03:06:49.727451  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:49.745210  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:49.762460  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:06:49.779547  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:49.799563  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:49.817781  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:49.836953  311649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:49.850440  311649 ssh_runner.go:195] Run: openssl version
	I1216 03:06:49.857619  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.865683  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:49.873871  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.877892  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.877973  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.913978  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:49.921860  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:49.930065  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.937793  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:49.945255  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.949134  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.949180  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.985209  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:49.993787  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:50.002243  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.011462  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:50.019433  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.023581  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.023639  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.058987  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:50.067234  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:50.074980  311649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:50.078979  311649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:50.079045  311649 kubeadm.go:401] StartCluster: {Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:50.079128  311649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:50.079165  311649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:50.107017  311649 cri.go:89] found id: ""
	I1216 03:06:50.107074  311649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:50.115787  311649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:50.124409  311649 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:50.124473  311649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:50.132370  311649 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:50.132387  311649 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:50.132436  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:50.140621  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:50.140678  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:50.148112  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:50.155314  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:50.155365  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:50.163444  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:50.172463  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:50.172506  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:50.181207  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:50.189958  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:50.190008  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:06:50.198269  311649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:50.259675  311649 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:50.322773  311649 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:47.498757  305678 addons.go:530] duration metric: took 521.268615ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:06:47.789777  305678 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-646016" context rescaled to 1 replicas
	W1216 03:06:49.289937  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:51.290033  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:52.430325  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:54.929684  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:53.025423  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:54.025688  296715 pod_ready.go:94] pod "coredns-66bc5c9577-xndlx" is "Ready"
	I1216 03:06:54.025718  296715 pod_ready.go:86] duration metric: took 37.505799828s for pod "coredns-66bc5c9577-xndlx" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.028581  296715 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.032600  296715 pod_ready.go:94] pod "etcd-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.032625  296715 pod_ready.go:86] duration metric: took 4.021316ms for pod "etcd-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.034486  296715 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.038375  296715 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.038397  296715 pod_ready.go:86] duration metric: took 3.88453ms for pod "kube-apiserver-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.042484  296715 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.223347  296715 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.223380  296715 pod_ready.go:86] duration metric: took 180.875268ms for pod "kube-controller-manager-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.423344  296715 pod_ready.go:83] waiting for pod "kube-proxy-2g6tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.823755  296715 pod_ready.go:94] pod "kube-proxy-2g6tn" is "Ready"
	I1216 03:06:54.823786  296715 pod_ready.go:86] duration metric: took 400.418478ms for pod "kube-proxy-2g6tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.023768  296715 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.423515  296715 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:55.423544  296715 pod_ready.go:86] duration metric: took 399.751113ms for pod "kube-scheduler-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.423557  296715 pod_ready.go:40] duration metric: took 38.907102315s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:55.468787  296715 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:06:55.471584  296715 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-079165" cluster and "default" namespace by default
	W1216 03:06:53.290307  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:55.789926  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	I1216 03:06:56.429615  301866 node_ready.go:49] node "embed-certs-742794" is "Ready"
	I1216 03:06:56.429647  301866 node_ready.go:38] duration metric: took 10.503121729s for node "embed-certs-742794" to be "Ready" ...
	I1216 03:06:56.429666  301866 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:56.429726  301866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:56.442056  301866 api_server.go:72] duration metric: took 11.022842819s to wait for apiserver process to appear ...
	I1216 03:06:56.442082  301866 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:56.442103  301866 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 03:06:56.447056  301866 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 03:06:56.448029  301866 api_server.go:141] control plane version: v1.34.2
	I1216 03:06:56.448055  301866 api_server.go:131] duration metric: took 5.963373ms to wait for apiserver health ...
	I1216 03:06:56.448066  301866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:56.451399  301866 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:56.451426  301866 system_pods.go:61] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.451432  301866 system_pods.go:61] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.451438  301866 system_pods.go:61] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.451444  301866 system_pods.go:61] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.451448  301866 system_pods.go:61] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.451451  301866 system_pods.go:61] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.451455  301866 system_pods.go:61] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.451461  301866 system_pods.go:61] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.451468  301866 system_pods.go:74] duration metric: took 3.397556ms to wait for pod list to return data ...
	I1216 03:06:56.451480  301866 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:56.453702  301866 default_sa.go:45] found service account: "default"
	I1216 03:06:56.453730  301866 default_sa.go:55] duration metric: took 2.242699ms for default service account to be created ...
	I1216 03:06:56.453737  301866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:06:56.456453  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.456483  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.456491  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.456499  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.456505  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.456511  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.456517  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.456522  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.456533  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.456552  301866 retry.go:31] will retry after 190.871511ms: missing components: kube-dns
	I1216 03:06:56.652497  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.652527  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.652533  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.652539  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.652545  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.652551  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.652556  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.652561  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.652569  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.652589  301866 retry.go:31] will retry after 263.135615ms: missing components: kube-dns
	I1216 03:06:56.920090  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.920129  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.920138  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.920147  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.920153  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.920160  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.920165  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.920175  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.920188  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.920211  301866 retry.go:31] will retry after 424.081703ms: missing components: kube-dns
	I1216 03:06:57.348588  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:57.348624  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:57.348633  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:57.348641  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:57.348647  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:57.348652  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:57.348697  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:57.348727  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:57.348738  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:57.348759  301866 retry.go:31] will retry after 548.921416ms: missing components: kube-dns
	I1216 03:06:57.902738  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:57.902773  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Running
	I1216 03:06:57.902782  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:57.902787  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:57.902793  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:57.902799  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:57.902804  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:57.902809  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:57.902814  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Running
	I1216 03:06:57.902854  301866 system_pods.go:126] duration metric: took 1.449111047s to wait for k8s-apps to be running ...
	I1216 03:06:57.902864  301866 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:06:57.902920  301866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:57.918800  301866 system_svc.go:56] duration metric: took 15.925631ms WaitForService to wait for kubelet
	I1216 03:06:57.918845  301866 kubeadm.go:587] duration metric: took 12.499634394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:57.918867  301866 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:57.922077  301866 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:57.922106  301866 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:57.922129  301866 node_conditions.go:105] duration metric: took 3.256352ms to run NodePressure ...
	I1216 03:06:57.922144  301866 start.go:242] waiting for startup goroutines ...
	I1216 03:06:57.922158  301866 start.go:247] waiting for cluster config update ...
	I1216 03:06:57.922174  301866 start.go:256] writing updated cluster config ...
	I1216 03:06:57.922469  301866 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:57.928097  301866 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:57.932548  301866 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.937661  301866 pod_ready.go:94] pod "coredns-66bc5c9577-rz62v" is "Ready"
	I1216 03:06:57.937691  301866 pod_ready.go:86] duration metric: took 5.118409ms for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.940008  301866 pod_ready.go:83] waiting for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.944367  301866 pod_ready.go:94] pod "etcd-embed-certs-742794" is "Ready"
	I1216 03:06:57.944388  301866 pod_ready.go:86] duration metric: took 4.358597ms for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.946807  301866 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.952672  301866 pod_ready.go:94] pod "kube-apiserver-embed-certs-742794" is "Ready"
	I1216 03:06:57.952695  301866 pod_ready.go:86] duration metric: took 5.836334ms for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.954866  301866 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.333247  301866 pod_ready.go:94] pod "kube-controller-manager-embed-certs-742794" is "Ready"
	I1216 03:06:58.333274  301866 pod_ready.go:86] duration metric: took 378.387824ms for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.532264  301866 pod_ready.go:83] waiting for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.933597  301866 pod_ready.go:94] pod "kube-proxy-899tv" is "Ready"
	I1216 03:06:58.933622  301866 pod_ready.go:86] duration metric: took 401.335129ms for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.133550  301866 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.532905  301866 pod_ready.go:94] pod "kube-scheduler-embed-certs-742794" is "Ready"
	I1216 03:06:59.532933  301866 pod_ready.go:86] duration metric: took 399.353784ms for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.532945  301866 pod_ready.go:40] duration metric: took 1.604812413s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:59.576977  301866 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:06:59.578834  301866 out.go:179] * Done! kubectl is now configured to use "embed-certs-742794" cluster and "default" namespace by default
	I1216 03:07:00.734146  311649 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:07:00.734241  311649 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:07:00.734336  311649 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:07:00.734445  311649 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:07:00.734513  311649 kubeadm.go:319] OS: Linux
	I1216 03:07:00.734595  311649 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:07:00.734665  311649 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:07:00.734745  311649 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:07:00.734807  311649 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:07:00.734941  311649 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:07:00.735023  311649 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:07:00.735095  311649 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:07:00.735168  311649 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:07:00.735274  311649 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:07:00.735439  311649 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:07:00.735570  311649 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:07:00.735660  311649 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:07:00.737122  311649 out.go:252]   - Generating certificates and keys ...
	I1216 03:07:00.737200  311649 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:07:00.737281  311649 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:07:00.737346  311649 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:07:00.737403  311649 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:07:00.737487  311649 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:07:00.737563  311649 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:07:00.737637  311649 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:07:00.737781  311649 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:07:00.737858  311649 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:07:00.737979  311649 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:07:00.738058  311649 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:07:00.738150  311649 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:07:00.738205  311649 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:07:00.738283  311649 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:07:00.738376  311649 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:07:00.738446  311649 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:07:00.738501  311649 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:07:00.738579  311649 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:07:00.738633  311649 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:07:00.738736  311649 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:07:00.738800  311649 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:07:00.740287  311649 out.go:252]   - Booting up control plane ...
	I1216 03:07:00.740372  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:07:00.740438  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:07:00.740524  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:07:00.740652  311649 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:07:00.740772  311649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:07:00.740946  311649 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:07:00.741073  311649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:07:00.741126  311649 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:07:00.741278  311649 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:07:00.741401  311649 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:07:00.741468  311649 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.981193ms
	I1216 03:07:00.741568  311649 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:07:00.741715  311649 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 03:07:00.741810  311649 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:07:00.741982  311649 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:07:00.742095  311649 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.877570449s
	I1216 03:07:00.742199  311649 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.443366435s
	I1216 03:07:00.742292  311649 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001072727s
	I1216 03:07:00.742448  311649 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:07:00.742548  311649 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:07:00.742619  311649 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:07:00.742803  311649 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:07:00.742872  311649 kubeadm.go:319] [bootstrap-token] Using token: qf8hji.ax4hpzqgdccyhdsp
	I1216 03:07:00.744251  311649 out.go:252]   - Configuring RBAC rules ...
	I1216 03:07:00.744348  311649 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:07:00.744421  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:07:00.744557  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:07:00.744689  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:07:00.744849  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:07:00.744950  311649 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:07:00.745043  311649 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:07:00.745086  311649 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:07:00.745140  311649 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:07:00.745153  311649 kubeadm.go:319] 
	I1216 03:07:00.745212  311649 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:07:00.745218  311649 kubeadm.go:319] 
	I1216 03:07:00.745298  311649 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:07:00.745308  311649 kubeadm.go:319] 
	I1216 03:07:00.745347  311649 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:07:00.745409  311649 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:07:00.745452  311649 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:07:00.745460  311649 kubeadm.go:319] 
	I1216 03:07:00.745522  311649 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:07:00.745539  311649 kubeadm.go:319] 
	I1216 03:07:00.745581  311649 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:07:00.745587  311649 kubeadm.go:319] 
	I1216 03:07:00.745630  311649 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:07:00.745694  311649 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:07:00.745766  311649 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:07:00.745773  311649 kubeadm.go:319] 
	I1216 03:07:00.745892  311649 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:07:00.745971  311649 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:07:00.745977  311649 kubeadm.go:319] 
	I1216 03:07:00.746075  311649 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qf8hji.ax4hpzqgdccyhdsp \
	I1216 03:07:00.746254  311649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:07:00.746296  311649 kubeadm.go:319] 	--control-plane 
	I1216 03:07:00.746311  311649 kubeadm.go:319] 
	I1216 03:07:00.746393  311649 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:07:00.746400  311649 kubeadm.go:319] 
	I1216 03:07:00.746491  311649 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qf8hji.ax4hpzqgdccyhdsp \
	I1216 03:07:00.746595  311649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:07:00.746611  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:07:00.748130  311649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1216 03:06:58.288855  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	I1216 03:06:58.790092  305678 node_ready.go:49] node "auto-646016" is "Ready"
	I1216 03:06:58.790126  305678 node_ready.go:38] duration metric: took 11.503870198s for node "auto-646016" to be "Ready" ...
	I1216 03:06:58.790140  305678 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:58.790207  305678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:58.808029  305678 api_server.go:72] duration metric: took 11.830592066s to wait for apiserver process to appear ...
	I1216 03:06:58.808059  305678 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:58.808080  305678 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 03:06:58.815119  305678 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 03:06:58.816423  305678 api_server.go:141] control plane version: v1.34.2
	I1216 03:06:58.816504  305678 api_server.go:131] duration metric: took 8.436974ms to wait for apiserver health ...
	I1216 03:06:58.816533  305678 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:58.821280  305678 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:58.821368  305678 system_pods.go:61] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:58.821400  305678 system_pods.go:61] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:58.821419  305678 system_pods.go:61] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:58.821439  305678 system_pods.go:61] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:58.821456  305678 system_pods.go:61] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:58.821475  305678 system_pods.go:61] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:58.821485  305678 system_pods.go:61] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:58.821492  305678 system_pods.go:61] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:58.821502  305678 system_pods.go:74] duration metric: took 4.950516ms to wait for pod list to return data ...
	I1216 03:06:58.821546  305678 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:58.824059  305678 default_sa.go:45] found service account: "default"
	I1216 03:06:58.824080  305678 default_sa.go:55] duration metric: took 2.522405ms for default service account to be created ...
	I1216 03:06:58.824091  305678 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:06:58.827274  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:58.827304  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:58.827312  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:58.827321  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:58.827326  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:58.827331  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:58.827341  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:58.827347  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:58.827358  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:58.827393  305678 retry.go:31] will retry after 259.79372ms: missing components: kube-dns
	I1216 03:06:59.091902  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:59.091931  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:59.091936  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:59.091960  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:59.091965  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:59.091971  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:59.091976  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:59.091984  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:59.091991  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:59.092011  305678 retry.go:31] will retry after 323.360238ms: missing components: kube-dns
	I1216 03:06:59.419712  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:59.419750  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Running
	I1216 03:06:59.419760  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:59.419766  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:59.419782  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:59.419793  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:59.419800  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:59.419815  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:59.419838  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Running
	I1216 03:06:59.419849  305678 system_pods.go:126] duration metric: took 595.751665ms to wait for k8s-apps to be running ...
	I1216 03:06:59.419884  305678 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:06:59.419987  305678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:59.433260  305678 system_svc.go:56] duration metric: took 13.390186ms WaitForService to wait for kubelet
	I1216 03:06:59.433294  305678 kubeadm.go:587] duration metric: took 12.45586268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:59.433320  305678 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:59.436233  305678 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:59.436259  305678 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:59.436274  305678 node_conditions.go:105] duration metric: took 2.942077ms to run NodePressure ...
	I1216 03:06:59.436285  305678 start.go:242] waiting for startup goroutines ...
	I1216 03:06:59.436292  305678 start.go:247] waiting for cluster config update ...
	I1216 03:06:59.436331  305678 start.go:256] writing updated cluster config ...
	I1216 03:06:59.436568  305678 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:59.440748  305678 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:59.444513  305678 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7kfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.448521  305678 pod_ready.go:94] pod "coredns-66bc5c9577-w7kfz" is "Ready"
	I1216 03:06:59.448540  305678 pod_ready.go:86] duration metric: took 4.002957ms for pod "coredns-66bc5c9577-w7kfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.450549  305678 pod_ready.go:83] waiting for pod "etcd-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.454198  305678 pod_ready.go:94] pod "etcd-auto-646016" is "Ready"
	I1216 03:06:59.454220  305678 pod_ready.go:86] duration metric: took 3.644632ms for pod "etcd-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.456274  305678 pod_ready.go:83] waiting for pod "kube-apiserver-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.459897  305678 pod_ready.go:94] pod "kube-apiserver-auto-646016" is "Ready"
	I1216 03:06:59.459920  305678 pod_ready.go:86] duration metric: took 3.627374ms for pod "kube-apiserver-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.462673  305678 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.845697  305678 pod_ready.go:94] pod "kube-controller-manager-auto-646016" is "Ready"
	I1216 03:06:59.845724  305678 pod_ready.go:86] duration metric: took 383.032974ms for pod "kube-controller-manager-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.046236  305678 pod_ready.go:83] waiting for pod "kube-proxy-hwssz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.445412  305678 pod_ready.go:94] pod "kube-proxy-hwssz" is "Ready"
	I1216 03:07:00.445441  305678 pod_ready.go:86] duration metric: took 399.181443ms for pod "kube-proxy-hwssz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.645598  305678 pod_ready.go:83] waiting for pod "kube-scheduler-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:01.046668  305678 pod_ready.go:94] pod "kube-scheduler-auto-646016" is "Ready"
	I1216 03:07:01.046698  305678 pod_ready.go:86] duration metric: took 401.069816ms for pod "kube-scheduler-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:01.046714  305678 pod_ready.go:40] duration metric: took 1.605935876s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:07:01.100168  305678 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:07:01.102443  305678 out.go:179] * Done! kubectl is now configured to use "auto-646016" cluster and "default" namespace by default
	I1216 03:07:00.749233  311649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:07:00.753983  311649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:07:00.753999  311649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:07:00.769555  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:07:00.983333  311649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:07:00.983402  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:00.983420  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-646016 minikube.k8s.io/updated_at=2025_12_16T03_07_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=kindnet-646016 minikube.k8s.io/primary=true
	I1216 03:07:00.994666  311649 ops.go:34] apiserver oom_adj: -16
	I1216 03:07:01.075445  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:01.575786  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:02.076390  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:02.575611  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:03.075547  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:03.575755  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:04.076148  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:04.575753  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:05.075504  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:05.576052  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:06.076085  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:06.148434  311649 kubeadm.go:1114] duration metric: took 5.165094916s to wait for elevateKubeSystemPrivileges
	I1216 03:07:06.148465  311649 kubeadm.go:403] duration metric: took 16.069424018s to StartCluster
	I1216 03:07:06.148481  311649 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:06.148539  311649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:07:06.150375  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:06.150605  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:07:06.150611  311649 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:07:06.150712  311649 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:07:06.150851  311649 addons.go:70] Setting storage-provisioner=true in profile "kindnet-646016"
	I1216 03:07:06.150859  311649 config.go:182] Loaded profile config "kindnet-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:06.150876  311649 addons.go:239] Setting addon storage-provisioner=true in "kindnet-646016"
	I1216 03:07:06.150888  311649 addons.go:70] Setting default-storageclass=true in profile "kindnet-646016"
	I1216 03:07:06.150909  311649 host.go:66] Checking if "kindnet-646016" exists ...
	I1216 03:07:06.150910  311649 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-646016"
	I1216 03:07:06.151282  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.151441  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.152143  311649 out.go:179] * Verifying Kubernetes components...
	I1216 03:07:06.153565  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:07:06.176673  311649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:07:06.178156  311649 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:07:06.178180  311649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:07:06.178249  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:07:06.179311  311649 addons.go:239] Setting addon default-storageclass=true in "kindnet-646016"
	I1216 03:07:06.179358  311649 host.go:66] Checking if "kindnet-646016" exists ...
	I1216 03:07:06.179811  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.206511  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:07:06.210644  311649 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:07:06.210666  311649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:07:06.210723  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:07:06.239999  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:07:06.243953  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:07:06.320537  311649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:07:06.324892  311649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:07:06.358779  311649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:07:06.419573  311649 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1216 03:07:06.421111  311649 node_ready.go:35] waiting up to 15m0s for node "kindnet-646016" to be "Ready" ...
	I1216 03:07:06.616907  311649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:07:06.618138  311649 addons.go:530] duration metric: took 467.411759ms for enable addons: enabled=[storage-provisioner default-storageclass]
	
	
	==> CRI-O <==
	Dec 16 03:06:56 embed-certs-742794 crio[776]: time="2025-12-16T03:06:56.71326777Z" level=info msg="Starting container: e00e383e1766e51eedf8ff1ddd8ff8b30afbd3d6449bb787a7132a9fd4fb65e1" id=8254adac-cf8f-4766-a4d1-d3a3aa081d74 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:56 embed-certs-742794 crio[776]: time="2025-12-16T03:06:56.715409578Z" level=info msg="Started container" PID=1836 containerID=e00e383e1766e51eedf8ff1ddd8ff8b30afbd3d6449bb787a7132a9fd4fb65e1 description=kube-system/coredns-66bc5c9577-rz62v/coredns id=8254adac-cf8f-4766-a4d1-d3a3aa081d74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16b88a56d92eb43435d80441714ee4f0a03d163c42b4d65c5630801a4a3f9eb2
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.038100853Z" level=info msg="Running pod sandbox: default/busybox/POD" id=70b5c7d1-8350-49d7-9600-54550acc6c96 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.038181863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.042918607Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b28f59b6048f042265c80ed747baeb073d1b7099489137c3b4924304265117d8 UID:91384ee0-dd8e-4fb3-ad77-eb48d3412f6e NetNS:/var/run/netns/f1924413-30ec-46a9-899c-9f8b8ce5de33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e8c488}] Aliases:map[]}"
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.042946442Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.053460434Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b28f59b6048f042265c80ed747baeb073d1b7099489137c3b4924304265117d8 UID:91384ee0-dd8e-4fb3-ad77-eb48d3412f6e NetNS:/var/run/netns/f1924413-30ec-46a9-899c-9f8b8ce5de33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e8c488}] Aliases:map[]}"
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.053604156Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.055234462Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.05606436Z" level=info msg="Ran pod sandbox b28f59b6048f042265c80ed747baeb073d1b7099489137c3b4924304265117d8 with infra container: default/busybox/POD" id=70b5c7d1-8350-49d7-9600-54550acc6c96 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.057350206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d14efff4-5ad3-4f93-afea-32f80ab0b285 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.05750725Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d14efff4-5ad3-4f93-afea-32f80ab0b285 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.05755308Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d14efff4-5ad3-4f93-afea-32f80ab0b285 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.05825737Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=154e8352-71a0-4e0c-bfc9-3895fac024b3 name=/runtime.v1.ImageService/PullImage
	Dec 16 03:07:00 embed-certs-742794 crio[776]: time="2025-12-16T03:07:00.061849659Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.309262549Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=154e8352-71a0-4e0c-bfc9-3895fac024b3 name=/runtime.v1.ImageService/PullImage
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.309995661Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=86b72456-c77f-401b-8ce3-1de5529d78fc name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.311339042Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2346c75d-7f89-4f46-b336-2748a20ca983 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.314522732Z" level=info msg="Creating container: default/busybox/busybox" id=4b2e3e01-e0d7-4f5f-aa37-5c67522010ff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.314694168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.319214278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.319770571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.346528917Z" level=info msg="Created container b3f03530b30d63f9d33c60f783c64b7a643b8647504d3d4e17e322697e7398e3: default/busybox/busybox" id=4b2e3e01-e0d7-4f5f-aa37-5c67522010ff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.347207306Z" level=info msg="Starting container: b3f03530b30d63f9d33c60f783c64b7a643b8647504d3d4e17e322697e7398e3" id=342db95a-a17c-42af-8914-e25318fb52b1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:07:01 embed-certs-742794 crio[776]: time="2025-12-16T03:07:01.348954539Z" level=info msg="Started container" PID=1909 containerID=b3f03530b30d63f9d33c60f783c64b7a643b8647504d3d4e17e322697e7398e3 description=default/busybox/busybox id=342db95a-a17c-42af-8914-e25318fb52b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b28f59b6048f042265c80ed747baeb073d1b7099489137c3b4924304265117d8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	b3f03530b30d6       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   b28f59b6048f0       busybox                                      default
	e00e383e1766e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   16b88a56d92eb       coredns-66bc5c9577-rz62v                     kube-system
	c25056bc642f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   b59a6e5642a7b       storage-provisioner                          kube-system
	c19e5f7a3df62       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   013ee932f1f27       kindnet-7vrj8                                kube-system
	29d8567d7cd90       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   9de17865d2af1       kube-proxy-899tv                             kube-system
	6b9b81405af96       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   6c644a685fe58       kube-controller-manager-embed-certs-742794   kube-system
	6ce29b0be2299       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   f7890611bf296       kube-apiserver-embed-certs-742794            kube-system
	746271d92d5a0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   e514475582ae2       kube-scheduler-embed-certs-742794            kube-system
	888f8db0ca754       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   592215a2a05b9       etcd-embed-certs-742794                      kube-system
	
	
	==> coredns [e00e383e1766e51eedf8ff1ddd8ff8b30afbd3d6449bb787a7132a9fd4fb65e1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46377 - 14778 "HINFO IN 5895625098316880234.3054613790502963642. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052763638s
	
	
	==> describe nodes <==
	Name:               embed-certs-742794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-742794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=embed-certs-742794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_06_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:06:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-742794
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:07:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:06:56 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:06:56 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:06:56 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:06:56 +0000   Tue, 16 Dec 2025 03:06:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-742794
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                227aaafb-25e6-44ee-81ce-b7feaed19af9
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-rz62v                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-742794                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-7vrj8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-742794             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-742794    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-899tv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-742794             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-742794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-742794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-742794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node embed-certs-742794 event: Registered Node embed-certs-742794 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-742794 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [888f8db0ca7540f2ba4a2ac6a8ebbe770439ffcf9272ee7418106b0f295afd00] <==
	{"level":"info","ts":"2025-12-16T03:06:41.116741Z","caller":"traceutil/trace.go:172","msg":"trace[1551992532] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-742794; range_end:; response_count:1; response_revision:263; }","duration":"343.148344ms","start":"2025-12-16T03:06:40.773586Z","end":"2025-12-16T03:06:41.116734Z","steps":["trace[1551992532] 'agreement among raft nodes before linearized reading'  (duration: 342.991135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:41.116760Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:40.773568Z","time spent":"343.185649ms","remote":"127.0.0.1:38192","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":1,"response size":5885,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-742794\" limit:1 "}
	{"level":"warn","ts":"2025-12-16T03:06:41.116981Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:40.789119Z","time spent":"327.624425ms","remote":"127.0.0.1:38192","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5846,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-embed-certs-742794\" mod_revision:240 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-embed-certs-742794\" value_size:5788 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-embed-certs-742794\" > >"}
	{"level":"warn","ts":"2025-12-16T03:06:41.134436Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.133476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-742794\" limit:1 ","response":"range_response_count:1 size:3389"}
	{"level":"warn","ts":"2025-12-16T03:06:41.134460Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.455994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-742794\" limit:1 ","response":"range_response_count:1 size:6299"}
	{"level":"warn","ts":"2025-12-16T03:06:41.134460Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.514203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T03:06:41.134493Z","caller":"traceutil/trace.go:172","msg":"trace[1138642993] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-embed-certs-742794; range_end:; response_count:1; response_revision:264; }","duration":"137.494509ms","start":"2025-12-16T03:06:40.996991Z","end":"2025-12-16T03:06:41.134485Z","steps":["trace[1138642993] 'agreement among raft nodes before linearized reading'  (duration: 137.363086ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.134512Z","caller":"traceutil/trace.go:172","msg":"trace[1207429719] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:264; }","duration":"137.569607ms","start":"2025-12-16T03:06:40.996934Z","end":"2025-12-16T03:06:41.134503Z","steps":["trace[1207429719] 'agreement among raft nodes before linearized reading'  (duration: 137.460309ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.134532Z","caller":"traceutil/trace.go:172","msg":"trace[827397674] transaction","detail":"{read_only:false; response_revision:265; number_of_response:1; }","duration":"135.712935ms","start":"2025-12-16T03:06:40.998809Z","end":"2025-12-16T03:06:41.134522Z","steps":["trace[827397674] 'process raft request'  (duration: 135.637364ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:41.134460Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.266366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-742794\" limit:1 ","response":"range_response_count:1 size:5861"}
	{"level":"info","ts":"2025-12-16T03:06:41.134607Z","caller":"traceutil/trace.go:172","msg":"trace[1782077910] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-742794; range_end:; response_count:1; response_revision:264; }","duration":"137.43629ms","start":"2025-12-16T03:06:40.997162Z","end":"2025-12-16T03:06:41.134598Z","steps":["trace[1782077910] 'agreement among raft nodes before linearized reading'  (duration: 137.1846ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.134482Z","caller":"traceutil/trace.go:172","msg":"trace[1114868481] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-742794; range_end:; response_count:1; response_revision:264; }","duration":"137.185631ms","start":"2025-12-16T03:06:40.997286Z","end":"2025-12-16T03:06:41.134471Z","steps":["trace[1114868481] 'agreement among raft nodes before linearized reading'  (duration: 137.051111ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:41.349468Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.96139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-12-16T03:06:41.349532Z","caller":"traceutil/trace.go:172","msg":"trace[1552772923] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:269; }","duration":"149.037081ms","start":"2025-12-16T03:06:41.200483Z","end":"2025-12-16T03:06:41.349521Z","steps":["trace[1552772923] 'range keys from in-memory index tree'  (duration: 148.902242ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.349568Z","caller":"traceutil/trace.go:172","msg":"trace[512426240] transaction","detail":"{read_only:false; response_revision:270; number_of_response:1; }","duration":"148.868572ms","start":"2025-12-16T03:06:41.200666Z","end":"2025-12-16T03:06:41.349535Z","steps":["trace[512426240] 'process raft request'  (duration: 147.604874ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.374416Z","caller":"traceutil/trace.go:172","msg":"trace[1994837711] transaction","detail":"{read_only:false; response_revision:271; number_of_response:1; }","duration":"171.005878ms","start":"2025-12-16T03:06:41.203381Z","end":"2025-12-16T03:06:41.374387Z","steps":["trace[1994837711] 'process raft request'  (duration: 170.778408ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:41.600784Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.359847ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790710839917454 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-16T03:06:41.601291Z","caller":"traceutil/trace.go:172","msg":"trace[1182229661] transaction","detail":"{read_only:false; response_revision:272; number_of_response:1; }","duration":"220.079993ms","start":"2025-12-16T03:06:41.381163Z","end":"2025-12-16T03:06:41.601243Z","steps":["trace[1182229661] 'process raft request'  (duration: 77.115921ms)","trace[1182229661] 'compare'  (duration: 141.18313ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T03:06:41.601370Z","caller":"traceutil/trace.go:172","msg":"trace[731712733] transaction","detail":"{read_only:false; response_revision:274; number_of_response:1; }","duration":"217.228764ms","start":"2025-12-16T03:06:41.384130Z","end":"2025-12-16T03:06:41.601358Z","steps":["trace[731712733] 'process raft request'  (duration: 217.178362ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.602394Z","caller":"traceutil/trace.go:172","msg":"trace[1989255076] transaction","detail":"{read_only:false; response_revision:273; number_of_response:1; }","duration":"218.785333ms","start":"2025-12-16T03:06:41.383585Z","end":"2025-12-16T03:06:41.602370Z","steps":["trace[1989255076] 'process raft request'  (duration: 217.520685ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:41.827906Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.703684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-742794\" limit:1 ","response":"range_response_count:1 size:4859"}
	{"level":"info","ts":"2025-12-16T03:06:41.827982Z","caller":"traceutil/trace.go:172","msg":"trace[12240081] range","detail":"{range_begin:/registry/minions/embed-certs-742794; range_end:; response_count:1; response_revision:274; }","duration":"136.789443ms","start":"2025-12-16T03:06:41.691175Z","end":"2025-12-16T03:06:41.827964Z","steps":["trace[12240081] 'agreement among raft nodes before linearized reading'  (duration: 53.451997ms)","trace[12240081] 'range keys from in-memory index tree'  (duration: 83.123549ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T03:06:41.827992Z","caller":"traceutil/trace.go:172","msg":"trace[461741838] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"208.265348ms","start":"2025-12-16T03:06:41.619708Z","end":"2025-12-16T03:06:41.827974Z","steps":["trace[461741838] 'process raft request'  (duration: 124.898747ms)","trace[461741838] 'compare'  (duration: 83.166626ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T03:06:41.836639Z","caller":"traceutil/trace.go:172","msg":"trace[1981062151] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"215.499246ms","start":"2025-12-16T03:06:41.621121Z","end":"2025-12-16T03:06:41.836621Z","steps":["trace[1981062151] 'process raft request'  (duration: 215.375096ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.836766Z","caller":"traceutil/trace.go:172","msg":"trace[1643711522] transaction","detail":"{read_only:false; response_revision:277; number_of_response:1; }","duration":"147.241314ms","start":"2025-12-16T03:06:41.689512Z","end":"2025-12-16T03:06:41.836754Z","steps":["trace[1643711522] 'process raft request'  (duration: 147.083321ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:07:08 up 49 min,  0 user,  load average: 3.88, 3.23, 2.10
	Linux embed-certs-742794 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c19e5f7a3df62eea3e53c2672dcd3a3bc46210e2f081a564574697829bd01ec8] <==
	I1216 03:06:45.901661       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:06:45.901996       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 03:06:45.902157       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:06:45.902182       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:06:45.902209       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:06:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:06:46.104113       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:06:46.104156       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:06:46.104167       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:06:46.104307       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:06:46.504773       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:06:46.504800       1 metrics.go:72] Registering metrics
	I1216 03:06:46.504881       1 controller.go:711] "Syncing nftables rules"
	I1216 03:06:56.107924       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:06:56.107982       1 main.go:301] handling current node
	I1216 03:07:06.106934       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:07:06.107003       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ce29b0be229915a81c33a57143b61dd83f4adfd7773b3e17894449d3a2929b4] <==
	I1216 03:06:36.839131       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:06:36.842554       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1216 03:06:36.842632       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:06:36.843072       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 03:06:36.850660       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 03:06:36.873073       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:06:37.740179       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 03:06:37.744375       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 03:06:37.744404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:06:38.407220       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:06:38.450117       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:06:38.549469       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 03:06:38.564886       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1216 03:06:38.566364       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:06:38.571780       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:06:39.337605       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:06:39.784290       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:06:39.797077       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 03:06:39.806596       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 03:06:45.090456       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1216 03:06:45.090456       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1216 03:06:45.342782       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:06:45.347118       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:06:45.390509       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1216 03:07:06.822607       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:54244: use of closed network connection
	
	
	==> kube-controller-manager [6b9b81405af969704d66bde1c34dd32b50baace637e8d1083c1796598f8b4e2e] <==
	I1216 03:06:44.336503       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 03:06:44.336557       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 03:06:44.336566       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 03:06:44.336595       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 03:06:44.337799       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 03:06:44.337861       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 03:06:44.337861       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 03:06:44.337912       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 03:06:44.337923       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:06:44.338183       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 03:06:44.338348       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 03:06:44.339196       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:06:44.340337       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 03:06:44.341456       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:06:44.341516       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:06:44.341559       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:06:44.341571       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:06:44.341578       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:06:44.342554       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:06:44.346780       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:06:44.348092       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-742794" podCIDRs=["10.244.0.0/24"]
	I1216 03:06:44.352206       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1216 03:06:44.353462       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 03:06:44.359844       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:06:59.288202       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [29d8567d7cd901c9bb8535a20817c1784482395aaa26fad3b1312365cfa821d8] <==
	I1216 03:06:45.709769       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:06:45.780426       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:06:45.881221       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:06:45.881280       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1216 03:06:45.881392       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:06:45.905644       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:06:45.905711       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:06:45.912104       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:06:45.912554       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:06:45.912589       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:45.914908       1 config.go:200] "Starting service config controller"
	I1216 03:06:45.914946       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:06:45.915080       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:06:45.915093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:06:45.915168       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:06:45.915174       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:06:45.918461       1 config.go:309] "Starting node config controller"
	I1216 03:06:45.918485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:06:45.918494       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:06:46.015032       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:06:46.015155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:06:46.015259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [746271d92d5a00d20251500619580f2a9fa1a5d0bab9068308a5f6c18acea6ac] <==
	E1216 03:06:36.808814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 03:06:36.809023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 03:06:36.809699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 03:06:36.809810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 03:06:36.810025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 03:06:36.810025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 03:06:36.810068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 03:06:36.810097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 03:06:36.810281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 03:06:36.810320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 03:06:37.629410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 03:06:37.643159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 03:06:37.732909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 03:06:37.760284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 03:06:37.788608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 03:06:37.827138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 03:06:37.858984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 03:06:37.863257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 03:06:37.878893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 03:06:37.896494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 03:06:38.043330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 03:06:38.085264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 03:06:38.144976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 03:06:38.192487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1216 03:06:40.602324       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 03:06:41 embed-certs-742794 kubelet[1320]: I1216 03:06:41.196913    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-742794" podStartSLOduration=2.196887776 podStartE2EDuration="2.196887776s" podCreationTimestamp="2025-12-16 03:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:41.120088222 +0000 UTC m=+1.587439665" watchObservedRunningTime="2025-12-16 03:06:41.196887776 +0000 UTC m=+1.664239220"
	Dec 16 03:06:41 embed-certs-742794 kubelet[1320]: I1216 03:06:41.376928    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-742794" podStartSLOduration=2.376900267 podStartE2EDuration="2.376900267s" podCreationTimestamp="2025-12-16 03:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:41.197116767 +0000 UTC m=+1.664468201" watchObservedRunningTime="2025-12-16 03:06:41.376900267 +0000 UTC m=+1.844251711"
	Dec 16 03:06:41 embed-certs-742794 kubelet[1320]: I1216 03:06:41.377079    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-742794" podStartSLOduration=2.3770717230000002 podStartE2EDuration="2.377071723s" podCreationTimestamp="2025-12-16 03:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:41.376151155 +0000 UTC m=+1.843502600" watchObservedRunningTime="2025-12-16 03:06:41.377071723 +0000 UTC m=+1.844423163"
	Dec 16 03:06:41 embed-certs-742794 kubelet[1320]: I1216 03:06:41.840067    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-742794" podStartSLOduration=2.840042273 podStartE2EDuration="2.840042273s" podCreationTimestamp="2025-12-16 03:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:41.612680048 +0000 UTC m=+2.080031491" watchObservedRunningTime="2025-12-16 03:06:41.840042273 +0000 UTC m=+2.307393718"
	Dec 16 03:06:44 embed-certs-742794 kubelet[1320]: I1216 03:06:44.362759    1320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 03:06:44 embed-certs-742794 kubelet[1320]: I1216 03:06:44.363563    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148314    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7d52f247-e00a-4271-86d6-a86423271e2c-cni-cfg\") pod \"kindnet-7vrj8\" (UID: \"7d52f247-e00a-4271-86d6-a86423271e2c\") " pod="kube-system/kindnet-7vrj8"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148348    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d52f247-e00a-4271-86d6-a86423271e2c-xtables-lock\") pod \"kindnet-7vrj8\" (UID: \"7d52f247-e00a-4271-86d6-a86423271e2c\") " pod="kube-system/kindnet-7vrj8"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148369    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4mb\" (UniqueName: \"kubernetes.io/projected/7d52f247-e00a-4271-86d6-a86423271e2c-kube-api-access-cq4mb\") pod \"kindnet-7vrj8\" (UID: \"7d52f247-e00a-4271-86d6-a86423271e2c\") " pod="kube-system/kindnet-7vrj8"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148385    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6750b5a-5904-46bb-bf98-7de6de239ee1-kube-proxy\") pod \"kube-proxy-899tv\" (UID: \"b6750b5a-5904-46bb-bf98-7de6de239ee1\") " pod="kube-system/kube-proxy-899tv"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148398    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6750b5a-5904-46bb-bf98-7de6de239ee1-lib-modules\") pod \"kube-proxy-899tv\" (UID: \"b6750b5a-5904-46bb-bf98-7de6de239ee1\") " pod="kube-system/kube-proxy-899tv"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148448    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6750b5a-5904-46bb-bf98-7de6de239ee1-xtables-lock\") pod \"kube-proxy-899tv\" (UID: \"b6750b5a-5904-46bb-bf98-7de6de239ee1\") " pod="kube-system/kube-proxy-899tv"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148467    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d52f247-e00a-4271-86d6-a86423271e2c-lib-modules\") pod \"kindnet-7vrj8\" (UID: \"7d52f247-e00a-4271-86d6-a86423271e2c\") " pod="kube-system/kindnet-7vrj8"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.148483    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rfm8\" (UniqueName: \"kubernetes.io/projected/b6750b5a-5904-46bb-bf98-7de6de239ee1-kube-api-access-4rfm8\") pod \"kube-proxy-899tv\" (UID: \"b6750b5a-5904-46bb-bf98-7de6de239ee1\") " pod="kube-system/kube-proxy-899tv"
	Dec 16 03:06:45 embed-certs-742794 kubelet[1320]: I1216 03:06:45.691718    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7vrj8" podStartSLOduration=0.691692029 podStartE2EDuration="691.692029ms" podCreationTimestamp="2025-12-16 03:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:45.676696495 +0000 UTC m=+6.144047938" watchObservedRunningTime="2025-12-16 03:06:45.691692029 +0000 UTC m=+6.159043472"
	Dec 16 03:06:46 embed-certs-742794 kubelet[1320]: I1216 03:06:46.675485    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-899tv" podStartSLOduration=1.675462743 podStartE2EDuration="1.675462743s" podCreationTimestamp="2025-12-16 03:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:46.675318715 +0000 UTC m=+7.142670158" watchObservedRunningTime="2025-12-16 03:06:46.675462743 +0000 UTC m=+7.142814186"
	Dec 16 03:06:56 embed-certs-742794 kubelet[1320]: I1216 03:06:56.327132    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 03:06:56 embed-certs-742794 kubelet[1320]: I1216 03:06:56.433034    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3431f40-12b9-40af-b117-2d33d57e2306-config-volume\") pod \"coredns-66bc5c9577-rz62v\" (UID: \"b3431f40-12b9-40af-b117-2d33d57e2306\") " pod="kube-system/coredns-66bc5c9577-rz62v"
	Dec 16 03:06:56 embed-certs-742794 kubelet[1320]: I1216 03:06:56.433088    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c4b740db-5b49-4331-ad97-1e4ba4180f9e-tmp\") pod \"storage-provisioner\" (UID: \"c4b740db-5b49-4331-ad97-1e4ba4180f9e\") " pod="kube-system/storage-provisioner"
	Dec 16 03:06:56 embed-certs-742794 kubelet[1320]: I1216 03:06:56.433119    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgjd\" (UniqueName: \"kubernetes.io/projected/b3431f40-12b9-40af-b117-2d33d57e2306-kube-api-access-wmgjd\") pod \"coredns-66bc5c9577-rz62v\" (UID: \"b3431f40-12b9-40af-b117-2d33d57e2306\") " pod="kube-system/coredns-66bc5c9577-rz62v"
	Dec 16 03:06:56 embed-certs-742794 kubelet[1320]: I1216 03:06:56.433143    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nl55\" (UniqueName: \"kubernetes.io/projected/c4b740db-5b49-4331-ad97-1e4ba4180f9e-kube-api-access-5nl55\") pod \"storage-provisioner\" (UID: \"c4b740db-5b49-4331-ad97-1e4ba4180f9e\") " pod="kube-system/storage-provisioner"
	Dec 16 03:06:57 embed-certs-742794 kubelet[1320]: I1216 03:06:57.711209    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.711183505 podStartE2EDuration="12.711183505s" podCreationTimestamp="2025-12-16 03:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:57.710564311 +0000 UTC m=+18.177915756" watchObservedRunningTime="2025-12-16 03:06:57.711183505 +0000 UTC m=+18.178534949"
	Dec 16 03:06:59 embed-certs-742794 kubelet[1320]: I1216 03:06:59.731101    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rz62v" podStartSLOduration=14.73107276 podStartE2EDuration="14.73107276s" podCreationTimestamp="2025-12-16 03:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:06:57.726369455 +0000 UTC m=+18.193720912" watchObservedRunningTime="2025-12-16 03:06:59.73107276 +0000 UTC m=+20.198424203"
	Dec 16 03:06:59 embed-certs-742794 kubelet[1320]: I1216 03:06:59.756196    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxhn\" (UniqueName: \"kubernetes.io/projected/91384ee0-dd8e-4fb3-ad77-eb48d3412f6e-kube-api-access-ckxhn\") pod \"busybox\" (UID: \"91384ee0-dd8e-4fb3-ad77-eb48d3412f6e\") " pod="default/busybox"
	Dec 16 03:07:01 embed-certs-742794 kubelet[1320]: I1216 03:07:01.713729    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.460712027 podStartE2EDuration="2.713712313s" podCreationTimestamp="2025-12-16 03:06:59 +0000 UTC" firstStartedPulling="2025-12-16 03:07:00.057857733 +0000 UTC m=+20.525209168" lastFinishedPulling="2025-12-16 03:07:01.310858015 +0000 UTC m=+21.778209454" observedRunningTime="2025-12-16 03:07:01.713430007 +0000 UTC m=+22.180781451" watchObservedRunningTime="2025-12-16 03:07:01.713712313 +0000 UTC m=+22.181063755"
	
	
	==> storage-provisioner [c25056bc642f59dcec3f0f5333ad78aaeb4e84cb5d5682c4c4c2c2f02f01ad1b] <==
	I1216 03:06:56.719109       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:06:56.727885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:06:56.727999       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:06:56.730318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:56.735936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:06:56.736149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:06:56.736292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcdc7c73-4d43-45a4-8fda-ffef275cc1fa", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-742794_79957077-ba93-4bc5-9f44-1a2761f03704 became leader
	I1216 03:06:56.736379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-742794_79957077-ba93-4bc5-9f44-1a2761f03704!
	W1216 03:06:56.740684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:56.746271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:06:56.837435       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-742794_79957077-ba93-4bc5-9f44-1a2761f03704!
	W1216 03:06:58.750231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:58.756315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:00.760394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:00.766120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:02.769553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:02.777093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:04.780595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:04.784368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:06.788996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:06.793324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742794 -n embed-certs-742794
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-742794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-079165 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-079165 --alsologtostderr -v=1: exit status 80 (2.435776079s)

-- stdout --
	* Pausing node default-k8s-diff-port-079165 ... 
	
	

-- /stdout --
** stderr ** 
	I1216 03:07:07.259693  316553 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:07:07.259989  316553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:07:07.260001  316553 out.go:374] Setting ErrFile to fd 2...
	I1216 03:07:07.260006  316553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:07:07.260287  316553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:07:07.260583  316553 out.go:368] Setting JSON to false
	I1216 03:07:07.260604  316553 mustload.go:66] Loading cluster: default-k8s-diff-port-079165
	I1216 03:07:07.261028  316553 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:07.261515  316553 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-079165 --format={{.State.Status}}
	I1216 03:07:07.283365  316553 host.go:66] Checking if "default-k8s-diff-port-079165" exists ...
	I1216 03:07:07.283649  316553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:07:07.343093  316553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-16 03:07:07.332878754 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:07:07.343901  316553 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765836331-22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765836331-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-079165 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 03:07:07.345835  316553 out.go:179] * Pausing node default-k8s-diff-port-079165 ... 
	I1216 03:07:07.347055  316553 host.go:66] Checking if "default-k8s-diff-port-079165" exists ...
	I1216 03:07:07.347322  316553 ssh_runner.go:195] Run: systemctl --version
	I1216 03:07:07.347360  316553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-079165
	I1216 03:07:07.365949  316553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/default-k8s-diff-port-079165/id_rsa Username:docker}
	I1216 03:07:07.466753  316553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:07:07.488762  316553 pause.go:52] kubelet running: true
	I1216 03:07:07.488861  316553 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:07:07.675182  316553 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:07:07.675282  316553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:07:07.752946  316553 cri.go:89] found id: "6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9"
	I1216 03:07:07.752985  316553 cri.go:89] found id: "b8d4c9ffcedfa2733716688755d46ab1cc30a1030b23f067da3967664b23c7d2"
	I1216 03:07:07.752990  316553 cri.go:89] found id: "e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa"
	I1216 03:07:07.752994  316553 cri.go:89] found id: "670184db3f80433545341b0de34dd360a72b345c9118b0e24ab4a3867cf7efb9"
	I1216 03:07:07.752997  316553 cri.go:89] found id: "07671a687288ffef99fb4f4809554ea0de160ede89fc4e8bb5a301fe2dd3c604"
	I1216 03:07:07.753002  316553 cri.go:89] found id: "7f87e3c1123f6a7cdb3d996a27b53d6f22b23b6351b58d02cdb00eb78de8c301"
	I1216 03:07:07.753004  316553 cri.go:89] found id: "8c44d80f00165272fd0d7f4fe0f600eca4f5945b7fff563472e76e5a5c4b2055"
	I1216 03:07:07.753007  316553 cri.go:89] found id: "f08cb369199f4afaffd3bcb8c4c8d87f52e397a6343b60c3723942d509b93e09"
	I1216 03:07:07.753010  316553 cri.go:89] found id: "9eb509b8cbb5d7a44028103cf5f6f28096129184fb10f77e1543e3556c3e9c5f"
	I1216 03:07:07.753026  316553 cri.go:89] found id: "9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916"
	I1216 03:07:07.753031  316553 cri.go:89] found id: "7b84397dc86262d0b356378c6b12b84c6636937a33524732bdbe7c871c61d178"
	I1216 03:07:07.753034  316553 cri.go:89] found id: ""
	I1216 03:07:07.753074  316553 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:07:07.764977  316553 retry.go:31] will retry after 335.161397ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:07:07Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:07:08.100304  316553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:07:08.113420  316553 pause.go:52] kubelet running: false
	I1216 03:07:08.113476  316553 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:07:08.272812  316553 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:07:08.272937  316553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:07:08.349043  316553 cri.go:89] found id: "6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9"
	I1216 03:07:08.349071  316553 cri.go:89] found id: "b8d4c9ffcedfa2733716688755d46ab1cc30a1030b23f067da3967664b23c7d2"
	I1216 03:07:08.349077  316553 cri.go:89] found id: "e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa"
	I1216 03:07:08.349082  316553 cri.go:89] found id: "670184db3f80433545341b0de34dd360a72b345c9118b0e24ab4a3867cf7efb9"
	I1216 03:07:08.349086  316553 cri.go:89] found id: "07671a687288ffef99fb4f4809554ea0de160ede89fc4e8bb5a301fe2dd3c604"
	I1216 03:07:08.349091  316553 cri.go:89] found id: "7f87e3c1123f6a7cdb3d996a27b53d6f22b23b6351b58d02cdb00eb78de8c301"
	I1216 03:07:08.349096  316553 cri.go:89] found id: "8c44d80f00165272fd0d7f4fe0f600eca4f5945b7fff563472e76e5a5c4b2055"
	I1216 03:07:08.349099  316553 cri.go:89] found id: "f08cb369199f4afaffd3bcb8c4c8d87f52e397a6343b60c3723942d509b93e09"
	I1216 03:07:08.349105  316553 cri.go:89] found id: "9eb509b8cbb5d7a44028103cf5f6f28096129184fb10f77e1543e3556c3e9c5f"
	I1216 03:07:08.349125  316553 cri.go:89] found id: "9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916"
	I1216 03:07:08.349134  316553 cri.go:89] found id: "7b84397dc86262d0b356378c6b12b84c6636937a33524732bdbe7c871c61d178"
	I1216 03:07:08.349139  316553 cri.go:89] found id: ""
	I1216 03:07:08.349185  316553 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:07:08.360960  316553 retry.go:31] will retry after 359.630766ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:07:08Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:07:08.721554  316553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:07:08.736216  316553 pause.go:52] kubelet running: false
	I1216 03:07:08.736283  316553 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:07:08.892192  316553 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:07:08.892271  316553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:07:08.968691  316553 cri.go:89] found id: "6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9"
	I1216 03:07:08.968714  316553 cri.go:89] found id: "b8d4c9ffcedfa2733716688755d46ab1cc30a1030b23f067da3967664b23c7d2"
	I1216 03:07:08.968718  316553 cri.go:89] found id: "e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa"
	I1216 03:07:08.968722  316553 cri.go:89] found id: "670184db3f80433545341b0de34dd360a72b345c9118b0e24ab4a3867cf7efb9"
	I1216 03:07:08.968725  316553 cri.go:89] found id: "07671a687288ffef99fb4f4809554ea0de160ede89fc4e8bb5a301fe2dd3c604"
	I1216 03:07:08.968728  316553 cri.go:89] found id: "7f87e3c1123f6a7cdb3d996a27b53d6f22b23b6351b58d02cdb00eb78de8c301"
	I1216 03:07:08.968731  316553 cri.go:89] found id: "8c44d80f00165272fd0d7f4fe0f600eca4f5945b7fff563472e76e5a5c4b2055"
	I1216 03:07:08.968733  316553 cri.go:89] found id: "f08cb369199f4afaffd3bcb8c4c8d87f52e397a6343b60c3723942d509b93e09"
	I1216 03:07:08.968736  316553 cri.go:89] found id: "9eb509b8cbb5d7a44028103cf5f6f28096129184fb10f77e1543e3556c3e9c5f"
	I1216 03:07:08.968754  316553 cri.go:89] found id: "9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916"
	I1216 03:07:08.968759  316553 cri.go:89] found id: "7b84397dc86262d0b356378c6b12b84c6636937a33524732bdbe7c871c61d178"
	I1216 03:07:08.968763  316553 cri.go:89] found id: ""
	I1216 03:07:08.968810  316553 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:07:08.981553  316553 retry.go:31] will retry after 406.014028ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:07:08Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:07:09.388024  316553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:07:09.401313  316553 pause.go:52] kubelet running: false
	I1216 03:07:09.401457  316553 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:07:09.536355  316553 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:07:09.536430  316553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:07:09.605104  316553 cri.go:89] found id: "6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9"
	I1216 03:07:09.605129  316553 cri.go:89] found id: "b8d4c9ffcedfa2733716688755d46ab1cc30a1030b23f067da3967664b23c7d2"
	I1216 03:07:09.605135  316553 cri.go:89] found id: "e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa"
	I1216 03:07:09.605141  316553 cri.go:89] found id: "670184db3f80433545341b0de34dd360a72b345c9118b0e24ab4a3867cf7efb9"
	I1216 03:07:09.605145  316553 cri.go:89] found id: "07671a687288ffef99fb4f4809554ea0de160ede89fc4e8bb5a301fe2dd3c604"
	I1216 03:07:09.605151  316553 cri.go:89] found id: "7f87e3c1123f6a7cdb3d996a27b53d6f22b23b6351b58d02cdb00eb78de8c301"
	I1216 03:07:09.605155  316553 cri.go:89] found id: "8c44d80f00165272fd0d7f4fe0f600eca4f5945b7fff563472e76e5a5c4b2055"
	I1216 03:07:09.605159  316553 cri.go:89] found id: "f08cb369199f4afaffd3bcb8c4c8d87f52e397a6343b60c3723942d509b93e09"
	I1216 03:07:09.605163  316553 cri.go:89] found id: "9eb509b8cbb5d7a44028103cf5f6f28096129184fb10f77e1543e3556c3e9c5f"
	I1216 03:07:09.605172  316553 cri.go:89] found id: "9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916"
	I1216 03:07:09.605177  316553 cri.go:89] found id: "7b84397dc86262d0b356378c6b12b84c6636937a33524732bdbe7c871c61d178"
	I1216 03:07:09.605182  316553 cri.go:89] found id: ""
	I1216 03:07:09.605235  316553 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:07:09.620567  316553 out.go:203] 
	W1216 03:07:09.621905  316553 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:07:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:07:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 03:07:09.621922  316553 out.go:285] * 
	* 
	W1216 03:07:09.625783  316553 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:07:09.627153  316553 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-079165 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-079165
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-079165:

-- stdout --
	[
	    {
	        "Id": "17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7",
	        "Created": "2025-12-16T03:05:00.382441166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:06:05.931740053Z",
	            "FinishedAt": "2025-12-16T03:06:04.95986793Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/hosts",
	        "LogPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7-json.log",
	        "Name": "/default-k8s-diff-port-079165",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-079165:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-079165",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7",
	                "LowerDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-079165",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-079165/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-079165",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-079165",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-079165",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f98c8ad465e96a5bb94e95c9a2dab8d58d3b7fcd070abbb2ca5340ebba9f0dae",
	            "SandboxKey": "/var/run/docker/netns/f98c8ad465e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-079165": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5282d64d27b5a2514f04f90d1cd32aa132a110f71ffb368ba477ac385094fbb2",
	                    "EndpointID": "183aa249c553afc117a658f69c4fef51b4216f3c39683119791d3664a723a257",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "5e:b5:ae:a8:cf:b1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-079165",
	                        "17c3b6c10d0d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
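The inspect dump above is raw JSON; the details the post-mortem relies on are the container state (State.Status "running", State.Paused false) and the published ports under NetworkSettings.Ports. A small illustrative Go sketch (an assumption about how the same data could be sliced, not part of the test suite) that extracts only those fields from `docker inspect` output:

	// Illustrative sketch only: parse the State and port bindings out of
	// `docker inspect <container>` JSON (the command is an array of objects).
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type inspect struct {
		State struct {
			Status string
			Paused bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	
	func main() {
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-079165").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("unexpected JSON:", err)
			return
		}
		for _, c := range containers {
			fmt.Println("status:", c.State.Status, "paused:", c.State.Paused)
			for port, binds := range c.NetworkSettings.Ports {
				for _, b := range binds {
					fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
				}
			}
		}
	}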
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165: exit status 2 (347.672822ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079165 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-079165 logs -n 25: (1.161000068s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ image   │ no-preload-307185 image list --format=json                                                                                                                                                                                                           │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p no-preload-307185 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-991316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p disable-driver-mounts-899443                                                                                                                                                                                                                      │ disable-driver-mounts-899443 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p auto-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:07 UTC │
	│ image   │ newest-cni-991316 image list --format=json                                                                                                                                                                                                           │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p newest-cni-991316 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p newest-cni-991316                                                                                                                                                                                                                                 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p newest-cni-991316                                                                                                                                                                                                                                 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p kindnet-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-646016               │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ ssh     │ -p auto-646016 pgrep -a kubelet                                                                                                                                                                                                                      │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-742794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ image   │ default-k8s-diff-port-079165 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ pause   │ -p default-k8s-diff-port-079165 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ stop    │ -p embed-certs-742794 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:36.912506  311649 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:36.912641  311649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:36.912649  311649 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:36.912656  311649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:36.912959  311649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:36.914248  311649 out.go:368] Setting JSON to false
	I1216 03:06:36.915985  311649 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2949,"bootTime":1765851448,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:36.916062  311649 start.go:143] virtualization: kvm guest
	I1216 03:06:36.918316  311649 out.go:179] * [kindnet-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:36.921321  311649 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:36.921324  311649 notify.go:221] Checking for updates...
	I1216 03:06:36.926057  311649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:36.934596  311649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:36.937150  311649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:36.938685  311649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:36.940325  311649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:36.943016  311649 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943170  311649 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943308  311649 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943452  311649 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:36.974200  311649 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:36.974308  311649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:37.059528  311649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 03:06:37.045598159 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:37.059688  311649 docker.go:319] overlay module found
	I1216 03:06:37.062324  311649 out.go:179] * Using the docker driver based on user configuration
	I1216 03:06:37.064270  311649 start.go:309] selected driver: docker
	I1216 03:06:37.064290  311649 start.go:927] validating driver "docker" against <nil>
	I1216 03:06:37.064306  311649 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:37.065092  311649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:37.134587  311649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 03:06:37.120781191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:37.134868  311649 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:06:37.135202  311649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:37.139979  311649 out.go:179] * Using Docker driver with root privileges
	I1216 03:06:37.141298  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:06:37.141320  311649 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:06:37.141420  311649 start.go:353] cluster config:
	{Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:37.142847  311649 out.go:179] * Starting "kindnet-646016" primary control-plane node in "kindnet-646016" cluster
	I1216 03:06:37.144033  311649 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:37.145214  311649 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:37.146273  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:37.146323  311649 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:37.146332  311649 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:37.146381  311649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:37.146438  311649 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:37.146451  311649 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:37.146582  311649 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json ...
	I1216 03:06:37.146609  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json: {Name:mka01fc2d87dd258e9e4215769fc0defca835ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:37.173960  311649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:37.174000  311649 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:37.174018  311649 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:37.174056  311649 start.go:360] acquireMachinesLock for kindnet-646016: {Name:mk5e982439fb31b21f2bf0f14b638469610e2ecb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:37.174175  311649 start.go:364] duration metric: took 97.838µs to acquireMachinesLock for "kindnet-646016"
	I1216 03:06:37.174206  311649 start.go:93] Provisioning new machine with config: &{Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:37.174307  311649 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:06:32.289938  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.297659  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:32.306317  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.310169  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.310225  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.358310  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:32.366800  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:32.374925  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.382691  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:32.390401  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.394611  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.394661  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.433920  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:32.442904  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:32.452551  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.460567  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:32.468254  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.472142  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.472194  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.512960  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:32.521828  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:32.531306  305678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:32.535264  305678 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:32.535327  305678 kubeadm.go:401] StartCluster: {Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:32.535422  305678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:32.535487  305678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:32.570545  305678 cri.go:89] found id: ""
	I1216 03:06:32.570617  305678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:32.580361  305678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:32.590036  305678 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:32.590101  305678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:32.600310  305678 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:32.600328  305678 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:32.600380  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:32.611364  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:32.611434  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:32.621528  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:32.630592  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:32.630691  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:32.639135  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:32.647615  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:32.647672  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:32.655556  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:32.663704  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:32.663751  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:06:32.671103  305678 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:32.732749  305678 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:32.798205  305678 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:36.811045  301866 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.343509782s
	I1216 03:06:37.324341  301866 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.856445935s
	I1216 03:06:38.970006  301866 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502495893s
	I1216 03:06:38.987567  301866 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:06:38.999896  301866 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:06:39.008632  301866 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:06:39.008951  301866 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-742794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:06:39.018346  301866 kubeadm.go:319] [bootstrap-token] Using token: jt3t6c.ftosdk62dr4hq8nx
	I1216 03:06:39.020229  301866 out.go:252]   - Configuring RBAC rules ...
	I1216 03:06:39.020406  301866 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:06:39.023717  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:06:39.030138  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:06:39.032812  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:06:39.035589  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:06:39.040407  301866 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:06:39.376310  301866 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:06:39.798064  301866 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:06:40.387055  301866 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:06:40.388094  301866 kubeadm.go:319] 
	I1216 03:06:40.388196  301866 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:06:40.388227  301866 kubeadm.go:319] 
	I1216 03:06:40.388343  301866 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:06:40.388356  301866 kubeadm.go:319] 
	I1216 03:06:40.388385  301866 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:06:40.388525  301866 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:06:40.388619  301866 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:06:40.388630  301866 kubeadm.go:319] 
	I1216 03:06:40.388735  301866 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:06:40.388751  301866 kubeadm.go:319] 
	I1216 03:06:40.388846  301866 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:06:40.388859  301866 kubeadm.go:319] 
	I1216 03:06:40.388922  301866 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:06:40.388986  301866 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:06:40.389039  301866 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:06:40.389047  301866 kubeadm.go:319] 
	I1216 03:06:40.389159  301866 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:06:40.389224  301866 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:06:40.389230  301866 kubeadm.go:319] 
	I1216 03:06:40.389294  301866 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jt3t6c.ftosdk62dr4hq8nx \
	I1216 03:06:40.389377  301866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:06:40.389395  301866 kubeadm.go:319] 	--control-plane 
	I1216 03:06:40.389400  301866 kubeadm.go:319] 
	I1216 03:06:40.389478  301866 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:06:40.389487  301866 kubeadm.go:319] 
	I1216 03:06:40.389595  301866 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jt3t6c.ftosdk62dr4hq8nx \
	I1216 03:06:40.389778  301866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:06:40.392758  301866 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:40.392974  301866 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:40.393004  301866 cni.go:84] Creating CNI manager for ""
	I1216 03:06:40.393011  301866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:40.488426  301866 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1216 03:06:37.030102  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:39.526744  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:37.176299  311649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:06:37.176572  311649 start.go:159] libmachine.API.Create for "kindnet-646016" (driver="docker")
	I1216 03:06:37.176609  311649 client.go:173] LocalClient.Create starting
	I1216 03:06:37.176683  311649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:06:37.176734  311649 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:37.176758  311649 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:37.176868  311649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:06:37.176934  311649 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:37.176955  311649 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:37.177346  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:06:37.198035  311649 cli_runner.go:211] docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:06:37.198117  311649 network_create.go:284] running [docker network inspect kindnet-646016] to gather additional debugging logs...
	I1216 03:06:37.198140  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016
	W1216 03:06:37.217351  311649 cli_runner.go:211] docker network inspect kindnet-646016 returned with exit code 1
	I1216 03:06:37.217385  311649 network_create.go:287] error running [docker network inspect kindnet-646016]: docker network inspect kindnet-646016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-646016 not found
	I1216 03:06:37.217404  311649 network_create.go:289] output of [docker network inspect kindnet-646016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-646016 not found
	
	** /stderr **
	I1216 03:06:37.217553  311649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:37.239137  311649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:06:37.240088  311649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:06:37.241036  311649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:06:37.242047  311649 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7bbc0}
	I1216 03:06:37.242076  311649 network_create.go:124] attempt to create docker network kindnet-646016 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 03:06:37.242129  311649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-646016 kindnet-646016
	I1216 03:06:37.303813  311649 network_create.go:108] docker network kindnet-646016 192.168.76.0/24 created
	I1216 03:06:37.303878  311649 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-646016" container
	I1216 03:06:37.303960  311649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:06:37.326233  311649 cli_runner.go:164] Run: docker volume create kindnet-646016 --label name.minikube.sigs.k8s.io=kindnet-646016 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:06:37.345781  311649 oci.go:103] Successfully created a docker volume kindnet-646016
	I1216 03:06:37.345884  311649 cli_runner.go:164] Run: docker run --rm --name kindnet-646016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646016 --entrypoint /usr/bin/test -v kindnet-646016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:06:37.826587  311649 oci.go:107] Successfully prepared a docker volume kindnet-646016
	I1216 03:06:37.826662  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:37.826680  311649 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:06:37.826753  311649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
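The lines above show how the kindnet-646016 network gets its address range: already-used 192.168.x.0/24 subnets (49, 58, 67) are skipped and the first free candidate (76) is handed to docker network create, after which the node's static IP is taken from that range (gateway .1, node .2). A rough Go sketch of that probing order, inferred from the logged sequence rather than taken from minikube's network.go:

package main

import "fmt"

func main() {
	// Subnets already occupied by other minikube bridge networks, per the log above.
	taken := map[int]bool{49: true, 58: true, 67: true}
	// Candidates are probed as 192.168.49.0/24, .58, .67, .76, ... matching the logged order.
	for third := 49; third <= 254; third += 9 {
		if taken[third] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway 192.168.%d.1, node 192.168.%d.2)\n", third, third, third)
		return
	}
}

The step of 9 and the starting point 49 are simply read off the sequence in the log; the real selection also covers other private ranges and records a reservation, as the reservation field above hints.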
	I1216 03:06:42.492370  305678 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:06:42.492457  305678 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:06:42.492585  305678 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:06:42.492655  305678 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:06:42.492702  305678 kubeadm.go:319] OS: Linux
	I1216 03:06:42.492792  305678 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:06:42.492885  305678 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:06:42.492953  305678 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:06:42.493065  305678 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:06:42.493139  305678 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:06:42.493206  305678 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:06:42.493274  305678 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:06:42.493336  305678 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:06:42.493440  305678 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:06:42.493521  305678 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:06:42.493648  305678 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:06:42.493769  305678 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:06:42.494971  305678 out.go:252]   - Generating certificates and keys ...
	I1216 03:06:42.495073  305678 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:06:42.495136  305678 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:06:42.495239  305678 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:06:42.495320  305678 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:06:42.495390  305678 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:06:42.495471  305678 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:06:42.495555  305678 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:06:42.495710  305678 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-646016 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:06:42.495789  305678 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:06:42.495956  305678 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-646016 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:06:42.496049  305678 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:06:42.496141  305678 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:06:42.496209  305678 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:06:42.496297  305678 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:06:42.496386  305678 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:06:42.496480  305678 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:06:42.496551  305678 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:06:42.496644  305678 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:06:42.496722  305678 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:06:42.496861  305678 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:06:42.496960  305678 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:06:42.498424  305678 out.go:252]   - Booting up control plane ...
	I1216 03:06:42.498537  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:06:42.498665  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:06:42.498728  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:06:42.498847  305678 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:06:42.498988  305678 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:06:42.499152  305678 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:06:42.499290  305678 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:06:42.499345  305678 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:06:42.499657  305678 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:06:42.499788  305678 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:06:42.499885  305678 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.112324ms
	I1216 03:06:42.500041  305678 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:06:42.500173  305678 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1216 03:06:42.500323  305678 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:06:42.500442  305678 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:06:42.500546  305678 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.175318386s
	I1216 03:06:42.500649  305678 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.4004376s
	I1216 03:06:42.500732  305678 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501249222s
	I1216 03:06:42.500884  305678 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:06:42.501003  305678 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:06:42.501081  305678 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:06:42.501327  305678 kubeadm.go:319] [mark-control-plane] Marking the node auto-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:06:42.501376  305678 kubeadm.go:319] [bootstrap-token] Using token: lvkpe0.dg8z2fbad7xa25ob
	I1216 03:06:42.502851  305678 out.go:252]   - Configuring RBAC rules ...
	I1216 03:06:42.502987  305678 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:06:42.503101  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:06:42.503288  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:06:42.503482  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:06:42.503640  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:06:42.503758  305678 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:06:42.503965  305678 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:06:42.504037  305678 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:06:42.504108  305678 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:06:42.504119  305678 kubeadm.go:319] 
	I1216 03:06:42.504203  305678 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:06:42.504215  305678 kubeadm.go:319] 
	I1216 03:06:42.504329  305678 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:06:42.504345  305678 kubeadm.go:319] 
	I1216 03:06:42.504395  305678 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:06:42.504479  305678 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:06:42.504568  305678 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:06:42.504579  305678 kubeadm.go:319] 
	I1216 03:06:42.504668  305678 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:06:42.504683  305678 kubeadm.go:319] 
	I1216 03:06:42.504765  305678 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:06:42.504775  305678 kubeadm.go:319] 
	I1216 03:06:42.504864  305678 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:06:42.504998  305678 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:06:42.505082  305678 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:06:42.505091  305678 kubeadm.go:319] 
	I1216 03:06:42.505215  305678 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:06:42.505315  305678 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:06:42.505323  305678 kubeadm.go:319] 
	I1216 03:06:42.505423  305678 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lvkpe0.dg8z2fbad7xa25ob \
	I1216 03:06:42.505558  305678 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:06:42.505584  305678 kubeadm.go:319] 	--control-plane 
	I1216 03:06:42.505592  305678 kubeadm.go:319] 
	I1216 03:06:42.505680  305678 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:06:42.505686  305678 kubeadm.go:319] 
	I1216 03:06:42.505749  305678 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lvkpe0.dg8z2fbad7xa25ob \
	I1216 03:06:42.505864  305678 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:06:42.505877  305678 cni.go:84] Creating CNI manager for ""
	I1216 03:06:42.505884  305678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:42.507282  305678 out.go:179] * Configuring CNI (Container Networking Interface) ...
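The kubeadm join commands printed above carry a --discovery-token-ca-cert-hash of the form sha256:<hex>. That value is the SHA-256 of the cluster CA certificate's public key (its SubjectPublicKeyInfo), so it can be recomputed from ca.crt. A standard-library sketch, assuming the certificate lives under the certificateDir shown above (/var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the "[certs] Using certificateDir" line above; adjust as needed.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, which is what kubeadm pins.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}

Joining nodes verify the served CA certificate against this pin before trusting the bootstrap token, which is why the same hash appears in both the control-plane and worker join commands.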
	I1216 03:06:40.556500  301866 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:06:40.561584  301866 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:06:40.561613  301866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:06:40.577774  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:06:41.613918  301866 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.036089237s)
	I1216 03:06:41.613972  301866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:06:41.614150  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:41.614173  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-742794 minikube.k8s.io/updated_at=2025_12_16T03_06_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=embed-certs-742794 minikube.k8s.io/primary=true
	I1216 03:06:41.626342  301866 ops.go:34] apiserver oom_adj: -16
	I1216 03:06:41.845142  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.345943  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.845105  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.345902  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.845135  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.345102  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.846051  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.345989  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.416661  301866 kubeadm.go:1114] duration metric: took 3.802575761s to wait for elevateKubeSystemPrivileges
	I1216 03:06:45.416708  301866 kubeadm.go:403] duration metric: took 16.875245445s to StartCluster
	I1216 03:06:45.416731  301866 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:45.416953  301866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:45.418953  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:45.419173  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:06:45.419182  301866 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:45.419261  301866 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:45.419359  301866 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-742794"
	I1216 03:06:45.419381  301866 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-742794"
	I1216 03:06:45.419396  301866 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:45.419414  301866 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:06:45.419459  301866 addons.go:70] Setting default-storageclass=true in profile "embed-certs-742794"
	I1216 03:06:45.419480  301866 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-742794"
	I1216 03:06:45.419894  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.420161  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.424569  301866 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:45.425946  301866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:45.449105  301866 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1216 03:06:42.026493  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:44.525591  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:45.450234  301866 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:45.450254  301866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:45.450315  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:45.450918  301866 addons.go:239] Setting addon default-storageclass=true in "embed-certs-742794"
	I1216 03:06:45.451884  301866 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:06:45.452391  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.474794  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:45.477242  301866 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:45.477258  301866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:45.477348  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:45.507412  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:45.532004  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:06:45.601352  301866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:45.618429  301866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:45.642176  301866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:45.751205  301866 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1216 03:06:45.926484  301866 node_ready.go:35] waiting up to 6m0s for node "embed-certs-742794" to be "Ready" ...
	I1216 03:06:45.931875  301866 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:06:42.187278  311649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.360463421s)
	I1216 03:06:42.187316  311649 kic.go:203] duration metric: took 4.360631679s to extract preloaded images to volume ...
	W1216 03:06:42.187436  311649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:06:42.187482  311649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:06:42.187655  311649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:06:42.264475  311649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-646016 --name kindnet-646016 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646016 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-646016 --network kindnet-646016 --ip 192.168.76.2 --volume kindnet-646016:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:06:42.589318  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Running}}
	I1216 03:06:42.613344  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.636793  311649 cli_runner.go:164] Run: docker exec kindnet-646016 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:06:42.692951  311649 oci.go:144] the created container "kindnet-646016" has a running status.
	I1216 03:06:42.693027  311649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa...
	I1216 03:06:42.723209  311649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:06:42.759298  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.788064  311649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:06:42.788107  311649 kic_runner.go:114] Args: [docker exec --privileged kindnet-646016 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:06:42.841532  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.870136  311649 machine.go:94] provisionDockerMachine start ...
	I1216 03:06:42.870241  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:42.900132  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:42.900484  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:42.900507  311649 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:06:42.901354  311649 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55522->127.0.0.1:33109: read: connection reset by peer
	I1216 03:06:46.051362  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646016
	
	I1216 03:06:46.051391  311649 ubuntu.go:182] provisioning hostname "kindnet-646016"
	I1216 03:06:46.051471  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.071710  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.072035  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.072054  311649 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-646016 && echo "kindnet-646016" | sudo tee /etc/hostname
	I1216 03:06:46.229313  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646016
	
	I1216 03:06:46.229390  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.250802  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.251099  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.251120  311649 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-646016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-646016/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-646016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:06:46.394197  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:06:46.394227  311649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:06:46.394258  311649 ubuntu.go:190] setting up certificates
	I1216 03:06:46.394271  311649 provision.go:84] configureAuth start
	I1216 03:06:46.394331  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:46.416666  311649 provision.go:143] copyHostCerts
	I1216 03:06:46.416740  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:06:46.416755  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:06:46.416885  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:06:46.417042  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:06:46.417058  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:06:46.417120  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:06:46.417250  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:06:46.417265  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:06:46.417314  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:06:46.417441  311649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.kindnet-646016 san=[127.0.0.1 192.168.76.2 kindnet-646016 localhost minikube]
	I1216 03:06:46.669146  311649 provision.go:177] copyRemoteCerts
	I1216 03:06:46.669199  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:06:46.669229  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.689779  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:46.791881  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:06:46.813593  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:06:46.832367  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 03:06:46.850692  311649 provision.go:87] duration metric: took 456.406984ms to configureAuth
	I1216 03:06:46.850726  311649 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:06:46.850934  311649 config.go:182] Loaded profile config "kindnet-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:46.851035  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.871285  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.871493  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.871507  311649 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:06:42.508558  305678 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:06:42.513406  305678 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:06:42.513425  305678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:06:42.529253  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:06:42.791486  305678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:06:42.791569  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.791628  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-646016 minikube.k8s.io/updated_at=2025_12_16T03_06_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=auto-646016 minikube.k8s.io/primary=true
	I1216 03:06:42.804265  305678 ops.go:34] apiserver oom_adj: -16
	I1216 03:06:42.902143  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.402756  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.903006  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.402268  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.902852  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.403072  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.902749  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.403233  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.902362  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.975382  305678 kubeadm.go:1114] duration metric: took 4.183882801s to wait for elevateKubeSystemPrivileges
	I1216 03:06:46.975415  305678 kubeadm.go:403] duration metric: took 14.440090912s to StartCluster
	I1216 03:06:46.975437  305678 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:46.975508  305678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:46.977140  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:46.977403  305678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:06:46.977404  305678 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:46.977486  305678 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:46.977579  305678 addons.go:70] Setting storage-provisioner=true in profile "auto-646016"
	I1216 03:06:46.977599  305678 addons.go:70] Setting default-storageclass=true in profile "auto-646016"
	I1216 03:06:46.977606  305678 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:46.977650  305678 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-646016"
	I1216 03:06:46.977607  305678 addons.go:239] Setting addon storage-provisioner=true in "auto-646016"
	I1216 03:06:46.977743  305678 host.go:66] Checking if "auto-646016" exists ...
	I1216 03:06:46.978050  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:46.978306  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:46.982308  305678 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:46.983620  305678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:47.002437  305678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:06:47.002605  305678 addons.go:239] Setting addon default-storageclass=true in "auto-646016"
	I1216 03:06:47.002668  305678 host.go:66] Checking if "auto-646016" exists ...
	I1216 03:06:47.003259  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:47.003564  305678 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:47.003579  305678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:47.003634  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:47.035685  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:47.038358  305678 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:47.038384  305678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:47.038454  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:47.063766  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:47.081171  305678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:06:47.136199  305678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:47.154654  305678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:47.183681  305678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:47.284544  305678 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 03:06:47.286230  305678 node_ready.go:35] waiting up to 15m0s for node "auto-646016" to be "Ready" ...
	I1216 03:06:47.496268  305678 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:06:47.193617  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:06:47.194051  311649 machine.go:97] duration metric: took 4.323568124s to provisionDockerMachine
	I1216 03:06:47.194092  311649 client.go:176] duration metric: took 10.017462228s to LocalClient.Create
	I1216 03:06:47.194125  311649 start.go:167] duration metric: took 10.017552786s to libmachine.API.Create "kindnet-646016"
	I1216 03:06:47.194137  311649 start.go:293] postStartSetup for "kindnet-646016" (driver="docker")
	I1216 03:06:47.194157  311649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:06:47.194247  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:06:47.194306  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.220949  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.335239  311649 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:06:47.339735  311649 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:06:47.339764  311649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:06:47.339779  311649 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:06:47.339871  311649 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:06:47.339980  311649 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:06:47.340094  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:06:47.348131  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:47.372022  311649 start.go:296] duration metric: took 177.869291ms for postStartSetup
	I1216 03:06:47.372443  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:47.397221  311649 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json ...
	I1216 03:06:47.397550  311649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:06:47.397606  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.415859  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.518022  311649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:06:47.523427  311649 start.go:128] duration metric: took 10.349106383s to createHost
	I1216 03:06:47.523456  311649 start.go:83] releasing machines lock for "kindnet-646016", held for 10.349266687s
	I1216 03:06:47.523530  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:47.546521  311649 ssh_runner.go:195] Run: cat /version.json
	I1216 03:06:47.546578  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.546599  311649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:06:47.546669  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.570313  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.570302  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.721354  311649 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:47.728115  311649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:06:47.764096  311649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:06:47.769332  311649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:06:47.769416  311649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:06:47.800234  311649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:06:47.800264  311649 start.go:496] detecting cgroup driver to use...
	I1216 03:06:47.800299  311649 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:06:47.800346  311649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:06:47.816262  311649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:06:47.828857  311649 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:06:47.828917  311649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:06:47.846000  311649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:06:47.864948  311649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:06:47.954521  311649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:06:48.052042  311649 docker.go:234] disabling docker service ...
	I1216 03:06:48.052109  311649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:06:48.070097  311649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:06:48.084175  311649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:06:48.172571  311649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:06:48.260483  311649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:06:48.273064  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:06:48.287395  311649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:06:48.287445  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.299225  311649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:06:48.299303  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.308963  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.318151  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.326922  311649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:06:48.336676  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.346533  311649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.363190  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.372458  311649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:06:48.380763  311649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:06:48.388403  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:48.471564  311649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:06:48.611303  311649 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:06:48.611368  311649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:06:48.615409  311649 start.go:564] Will wait 60s for crictl version
	I1216 03:06:48.615453  311649 ssh_runner.go:195] Run: which crictl
	I1216 03:06:48.619372  311649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:06:48.644746  311649 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:06:48.644839  311649 ssh_runner.go:195] Run: crio --version
	I1216 03:06:48.673737  311649 ssh_runner.go:195] Run: crio --version
	I1216 03:06:48.702915  311649 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
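The crio.go lines above configure CRI-O in place: crictl is pointed at unix:///var/run/crio/crio.sock, the pause image is pinned to registry.k8s.io/pause:3.10.1, and cgroup_manager is forced to systemd to match the detected host cgroup driver, before crio is restarted. The same two rewrites expressed as a small Go program (illustration only; the starting values in conf are assumptions, not read from the node):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical original drop-in contents; the real file is /etc/crio/crio.conf.d/02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}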
	I1216 03:06:45.932861  301866 addons.go:530] duration metric: took 513.595889ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:06:46.256598  301866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-742794" context rescaled to 1 replicas
	W1216 03:06:47.929443  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:49.930144  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:46.525728  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:48.526032  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:50.526076  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
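The repeated "(will retry)" warnings above come from the readiness polling that follows cluster bring-up: node_ready.go and pod_ready.go re-check the Ready condition on an interval until the deadline set by start.go (6m0s for embed-certs-742794, 15m0s for auto-646016) expires. A generic poll-until-deadline helper in the same spirit, as an illustration rather than minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls cond every interval until it reports true, returns an error,
// or the timeout elapses, mirroring the retry pattern in the log above.
func waitFor(cond func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Placeholder condition: pretend the node becomes Ready after about three seconds.
	err := waitFor(func() (bool, error) {
		return time.Since(start) > 3*time.Second, nil
	}, 500*time.Millisecond, 6*time.Minute)
	fmt.Println("wait result:", err)
}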
	I1216 03:06:48.704147  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:48.721392  311649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 03:06:48.725738  311649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:48.736033  311649 kubeadm.go:884] updating cluster {Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:48.736149  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:48.736193  311649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:48.766912  311649 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:48.766931  311649 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:48.766981  311649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:48.793469  311649 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:48.793488  311649 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:48.793496  311649 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 03:06:48.793584  311649 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-646016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 03:06:48.793668  311649 ssh_runner.go:195] Run: crio config
	I1216 03:06:48.842069  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:06:48.842093  311649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:06:48.842113  311649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-646016 NodeName:kindnet-646016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:48.842278  311649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-646016"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:48.842350  311649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:06:48.851041  311649 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:48.851093  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:48.859976  311649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1216 03:06:48.873334  311649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:06:48.888764  311649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1216 03:06:48.901633  311649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:48.905305  311649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:48.915330  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:48.995098  311649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:49.027736  311649 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016 for IP: 192.168.76.2
	I1216 03:06:49.027754  311649 certs.go:195] generating shared ca certs ...
	I1216 03:06:49.027769  311649 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.027940  311649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:49.027991  311649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:49.027999  311649 certs.go:257] generating profile certs ...
	I1216 03:06:49.028050  311649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key
	I1216 03:06:49.028069  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt with IP's: []
	I1216 03:06:49.358443  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt ...
	I1216 03:06:49.358470  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt: {Name:mkd8b5e5f321efa7e9844310e79db14d2c69cdf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.358640  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key ...
	I1216 03:06:49.358651  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key: {Name:mk0a2ea2343a207eb4a3896019c7d6511f76de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.358724  311649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97
	I1216 03:06:49.358739  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 03:06:49.547719  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 ...
	I1216 03:06:49.547746  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97: {Name:mk0ea02365886ae096b9e5de77c47711b9643fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.547929  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97 ...
	I1216 03:06:49.547944  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97: {Name:mke361247d57cd7cd2fc7dc06040d57afdcb0c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.548042  311649 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt
	I1216 03:06:49.548133  311649 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key
	I1216 03:06:49.548195  311649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key
	I1216 03:06:49.548210  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt with IP's: []
	I1216 03:06:49.631433  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt ...
	I1216 03:06:49.631466  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt: {Name:mkad790b016b1279eb196a1c4cb8b1281ceb030b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.631654  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key ...
	I1216 03:06:49.631672  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key: {Name:mk3cdaee6c7ccfd128b07eb42506350a5c451ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.631986  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:49.632029  311649 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:49.632038  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:49.632063  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:49.632086  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:49.632113  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:49.632153  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:49.632685  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:49.654165  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:49.673281  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:49.691565  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:49.710072  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 03:06:49.727451  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:49.745210  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:49.762460  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:06:49.779547  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:49.799563  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:49.817781  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:49.836953  311649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:49.850440  311649 ssh_runner.go:195] Run: openssl version
	I1216 03:06:49.857619  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.865683  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:49.873871  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.877892  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.877973  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.913978  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:49.921860  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:49.930065  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.937793  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:49.945255  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.949134  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.949180  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.985209  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:49.993787  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:50.002243  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.011462  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:50.019433  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.023581  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.023639  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.058987  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:50.067234  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:50.074980  311649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:50.078979  311649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:50.079045  311649 kubeadm.go:401] StartCluster: {Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:50.079128  311649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:50.079165  311649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:50.107017  311649 cri.go:89] found id: ""
	I1216 03:06:50.107074  311649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:50.115787  311649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:50.124409  311649 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:50.124473  311649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:50.132370  311649 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:50.132387  311649 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:50.132436  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:50.140621  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:50.140678  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:50.148112  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:50.155314  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:50.155365  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:50.163444  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:50.172463  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:50.172506  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:50.181207  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:50.189958  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:50.190008  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:06:50.198269  311649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:50.259675  311649 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:50.322773  311649 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:47.498757  305678 addons.go:530] duration metric: took 521.268615ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:06:47.789777  305678 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-646016" context rescaled to 1 replicas
	W1216 03:06:49.289937  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:51.290033  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:52.430325  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:54.929684  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:53.025423  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:54.025688  296715 pod_ready.go:94] pod "coredns-66bc5c9577-xndlx" is "Ready"
	I1216 03:06:54.025718  296715 pod_ready.go:86] duration metric: took 37.505799828s for pod "coredns-66bc5c9577-xndlx" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.028581  296715 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.032600  296715 pod_ready.go:94] pod "etcd-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.032625  296715 pod_ready.go:86] duration metric: took 4.021316ms for pod "etcd-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.034486  296715 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.038375  296715 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.038397  296715 pod_ready.go:86] duration metric: took 3.88453ms for pod "kube-apiserver-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.042484  296715 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.223347  296715 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.223380  296715 pod_ready.go:86] duration metric: took 180.875268ms for pod "kube-controller-manager-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.423344  296715 pod_ready.go:83] waiting for pod "kube-proxy-2g6tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.823755  296715 pod_ready.go:94] pod "kube-proxy-2g6tn" is "Ready"
	I1216 03:06:54.823786  296715 pod_ready.go:86] duration metric: took 400.418478ms for pod "kube-proxy-2g6tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.023768  296715 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.423515  296715 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:55.423544  296715 pod_ready.go:86] duration metric: took 399.751113ms for pod "kube-scheduler-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.423557  296715 pod_ready.go:40] duration metric: took 38.907102315s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:55.468787  296715 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:06:55.471584  296715 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-079165" cluster and "default" namespace by default
	W1216 03:06:53.290307  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:55.789926  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	I1216 03:06:56.429615  301866 node_ready.go:49] node "embed-certs-742794" is "Ready"
	I1216 03:06:56.429647  301866 node_ready.go:38] duration metric: took 10.503121729s for node "embed-certs-742794" to be "Ready" ...
	I1216 03:06:56.429666  301866 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:56.429726  301866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:56.442056  301866 api_server.go:72] duration metric: took 11.022842819s to wait for apiserver process to appear ...
	I1216 03:06:56.442082  301866 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:56.442103  301866 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 03:06:56.447056  301866 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 03:06:56.448029  301866 api_server.go:141] control plane version: v1.34.2
	I1216 03:06:56.448055  301866 api_server.go:131] duration metric: took 5.963373ms to wait for apiserver health ...
	I1216 03:06:56.448066  301866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:56.451399  301866 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:56.451426  301866 system_pods.go:61] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.451432  301866 system_pods.go:61] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.451438  301866 system_pods.go:61] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.451444  301866 system_pods.go:61] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.451448  301866 system_pods.go:61] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.451451  301866 system_pods.go:61] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.451455  301866 system_pods.go:61] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.451461  301866 system_pods.go:61] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.451468  301866 system_pods.go:74] duration metric: took 3.397556ms to wait for pod list to return data ...
	I1216 03:06:56.451480  301866 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:56.453702  301866 default_sa.go:45] found service account: "default"
	I1216 03:06:56.453730  301866 default_sa.go:55] duration metric: took 2.242699ms for default service account to be created ...
	I1216 03:06:56.453737  301866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:06:56.456453  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.456483  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.456491  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.456499  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.456505  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.456511  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.456517  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.456522  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.456533  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.456552  301866 retry.go:31] will retry after 190.871511ms: missing components: kube-dns
	I1216 03:06:56.652497  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.652527  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.652533  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.652539  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.652545  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.652551  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.652556  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.652561  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.652569  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.652589  301866 retry.go:31] will retry after 263.135615ms: missing components: kube-dns
	I1216 03:06:56.920090  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.920129  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.920138  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.920147  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.920153  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.920160  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.920165  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.920175  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.920188  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.920211  301866 retry.go:31] will retry after 424.081703ms: missing components: kube-dns
	I1216 03:06:57.348588  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:57.348624  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:57.348633  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:57.348641  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:57.348647  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:57.348652  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:57.348697  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:57.348727  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:57.348738  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:57.348759  301866 retry.go:31] will retry after 548.921416ms: missing components: kube-dns
	I1216 03:06:57.902738  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:57.902773  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Running
	I1216 03:06:57.902782  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:57.902787  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:57.902793  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:57.902799  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:57.902804  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:57.902809  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:57.902814  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Running
	I1216 03:06:57.902854  301866 system_pods.go:126] duration metric: took 1.449111047s to wait for k8s-apps to be running ...
	I1216 03:06:57.902864  301866 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:06:57.902920  301866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:57.918800  301866 system_svc.go:56] duration metric: took 15.925631ms WaitForService to wait for kubelet
	I1216 03:06:57.918845  301866 kubeadm.go:587] duration metric: took 12.499634394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:57.918867  301866 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:57.922077  301866 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:57.922106  301866 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:57.922129  301866 node_conditions.go:105] duration metric: took 3.256352ms to run NodePressure ...
	I1216 03:06:57.922144  301866 start.go:242] waiting for startup goroutines ...
	I1216 03:06:57.922158  301866 start.go:247] waiting for cluster config update ...
	I1216 03:06:57.922174  301866 start.go:256] writing updated cluster config ...
	I1216 03:06:57.922469  301866 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:57.928097  301866 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:57.932548  301866 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.937661  301866 pod_ready.go:94] pod "coredns-66bc5c9577-rz62v" is "Ready"
	I1216 03:06:57.937691  301866 pod_ready.go:86] duration metric: took 5.118409ms for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.940008  301866 pod_ready.go:83] waiting for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.944367  301866 pod_ready.go:94] pod "etcd-embed-certs-742794" is "Ready"
	I1216 03:06:57.944388  301866 pod_ready.go:86] duration metric: took 4.358597ms for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.946807  301866 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.952672  301866 pod_ready.go:94] pod "kube-apiserver-embed-certs-742794" is "Ready"
	I1216 03:06:57.952695  301866 pod_ready.go:86] duration metric: took 5.836334ms for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.954866  301866 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.333247  301866 pod_ready.go:94] pod "kube-controller-manager-embed-certs-742794" is "Ready"
	I1216 03:06:58.333274  301866 pod_ready.go:86] duration metric: took 378.387824ms for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.532264  301866 pod_ready.go:83] waiting for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.933597  301866 pod_ready.go:94] pod "kube-proxy-899tv" is "Ready"
	I1216 03:06:58.933622  301866 pod_ready.go:86] duration metric: took 401.335129ms for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.133550  301866 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.532905  301866 pod_ready.go:94] pod "kube-scheduler-embed-certs-742794" is "Ready"
	I1216 03:06:59.532933  301866 pod_ready.go:86] duration metric: took 399.353784ms for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.532945  301866 pod_ready.go:40] duration metric: took 1.604812413s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:59.576977  301866 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:06:59.578834  301866 out.go:179] * Done! kubectl is now configured to use "embed-certs-742794" cluster and "default" namespace by default
	I1216 03:07:00.734146  311649 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:07:00.734241  311649 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:07:00.734336  311649 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:07:00.734445  311649 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:07:00.734513  311649 kubeadm.go:319] OS: Linux
	I1216 03:07:00.734595  311649 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:07:00.734665  311649 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:07:00.734745  311649 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:07:00.734807  311649 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:07:00.734941  311649 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:07:00.735023  311649 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:07:00.735095  311649 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:07:00.735168  311649 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:07:00.735274  311649 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:07:00.735439  311649 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:07:00.735570  311649 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:07:00.735660  311649 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:07:00.737122  311649 out.go:252]   - Generating certificates and keys ...
	I1216 03:07:00.737200  311649 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:07:00.737281  311649 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:07:00.737346  311649 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:07:00.737403  311649 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:07:00.737487  311649 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:07:00.737563  311649 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:07:00.737637  311649 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:07:00.737781  311649 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:07:00.737858  311649 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:07:00.737979  311649 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:07:00.738058  311649 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:07:00.738150  311649 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:07:00.738205  311649 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:07:00.738283  311649 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:07:00.738376  311649 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:07:00.738446  311649 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:07:00.738501  311649 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:07:00.738579  311649 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:07:00.738633  311649 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:07:00.738736  311649 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:07:00.738800  311649 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:07:00.740287  311649 out.go:252]   - Booting up control plane ...
	I1216 03:07:00.740372  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:07:00.740438  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:07:00.740524  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:07:00.740652  311649 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:07:00.740772  311649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:07:00.740946  311649 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:07:00.741073  311649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:07:00.741126  311649 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:07:00.741278  311649 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:07:00.741401  311649 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:07:00.741468  311649 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.981193ms
	I1216 03:07:00.741568  311649 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:07:00.741715  311649 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 03:07:00.741810  311649 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:07:00.741982  311649 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:07:00.742095  311649 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.877570449s
	I1216 03:07:00.742199  311649 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.443366435s
	I1216 03:07:00.742292  311649 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001072727s
	I1216 03:07:00.742448  311649 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:07:00.742548  311649 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:07:00.742619  311649 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:07:00.742803  311649 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:07:00.742872  311649 kubeadm.go:319] [bootstrap-token] Using token: qf8hji.ax4hpzqgdccyhdsp
	I1216 03:07:00.744251  311649 out.go:252]   - Configuring RBAC rules ...
	I1216 03:07:00.744348  311649 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:07:00.744421  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:07:00.744557  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:07:00.744689  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:07:00.744849  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:07:00.744950  311649 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:07:00.745043  311649 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:07:00.745086  311649 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:07:00.745140  311649 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:07:00.745153  311649 kubeadm.go:319] 
	I1216 03:07:00.745212  311649 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:07:00.745218  311649 kubeadm.go:319] 
	I1216 03:07:00.745298  311649 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:07:00.745308  311649 kubeadm.go:319] 
	I1216 03:07:00.745347  311649 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:07:00.745409  311649 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:07:00.745452  311649 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:07:00.745460  311649 kubeadm.go:319] 
	I1216 03:07:00.745522  311649 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:07:00.745539  311649 kubeadm.go:319] 
	I1216 03:07:00.745581  311649 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:07:00.745587  311649 kubeadm.go:319] 
	I1216 03:07:00.745630  311649 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:07:00.745694  311649 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:07:00.745766  311649 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:07:00.745773  311649 kubeadm.go:319] 
	I1216 03:07:00.745892  311649 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:07:00.745971  311649 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:07:00.745977  311649 kubeadm.go:319] 
	I1216 03:07:00.746075  311649 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qf8hji.ax4hpzqgdccyhdsp \
	I1216 03:07:00.746254  311649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:07:00.746296  311649 kubeadm.go:319] 	--control-plane 
	I1216 03:07:00.746311  311649 kubeadm.go:319] 
	I1216 03:07:00.746393  311649 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:07:00.746400  311649 kubeadm.go:319] 
	I1216 03:07:00.746491  311649 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qf8hji.ax4hpzqgdccyhdsp \
	I1216 03:07:00.746595  311649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:07:00.746611  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:07:00.748130  311649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1216 03:06:58.288855  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	I1216 03:06:58.790092  305678 node_ready.go:49] node "auto-646016" is "Ready"
	I1216 03:06:58.790126  305678 node_ready.go:38] duration metric: took 11.503870198s for node "auto-646016" to be "Ready" ...
	I1216 03:06:58.790140  305678 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:58.790207  305678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:58.808029  305678 api_server.go:72] duration metric: took 11.830592066s to wait for apiserver process to appear ...
	I1216 03:06:58.808059  305678 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:58.808080  305678 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 03:06:58.815119  305678 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 03:06:58.816423  305678 api_server.go:141] control plane version: v1.34.2
	I1216 03:06:58.816504  305678 api_server.go:131] duration metric: took 8.436974ms to wait for apiserver health ...
	I1216 03:06:58.816533  305678 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:58.821280  305678 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:58.821368  305678 system_pods.go:61] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:58.821400  305678 system_pods.go:61] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:58.821419  305678 system_pods.go:61] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:58.821439  305678 system_pods.go:61] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:58.821456  305678 system_pods.go:61] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:58.821475  305678 system_pods.go:61] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:58.821485  305678 system_pods.go:61] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:58.821492  305678 system_pods.go:61] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:58.821502  305678 system_pods.go:74] duration metric: took 4.950516ms to wait for pod list to return data ...
	I1216 03:06:58.821546  305678 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:58.824059  305678 default_sa.go:45] found service account: "default"
	I1216 03:06:58.824080  305678 default_sa.go:55] duration metric: took 2.522405ms for default service account to be created ...
	I1216 03:06:58.824091  305678 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:06:58.827274  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:58.827304  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:58.827312  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:58.827321  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:58.827326  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:58.827331  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:58.827341  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:58.827347  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:58.827358  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:58.827393  305678 retry.go:31] will retry after 259.79372ms: missing components: kube-dns
	I1216 03:06:59.091902  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:59.091931  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:59.091936  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:59.091960  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:59.091965  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:59.091971  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:59.091976  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:59.091984  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:59.091991  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:59.092011  305678 retry.go:31] will retry after 323.360238ms: missing components: kube-dns
	I1216 03:06:59.419712  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:59.419750  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Running
	I1216 03:06:59.419760  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:59.419766  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:59.419782  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:59.419793  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:59.419800  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:59.419815  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:59.419838  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Running
	I1216 03:06:59.419849  305678 system_pods.go:126] duration metric: took 595.751665ms to wait for k8s-apps to be running ...
	I1216 03:06:59.419884  305678 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:06:59.419987  305678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:59.433260  305678 system_svc.go:56] duration metric: took 13.390186ms WaitForService to wait for kubelet
	I1216 03:06:59.433294  305678 kubeadm.go:587] duration metric: took 12.45586268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:59.433320  305678 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:59.436233  305678 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:59.436259  305678 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:59.436274  305678 node_conditions.go:105] duration metric: took 2.942077ms to run NodePressure ...
	I1216 03:06:59.436285  305678 start.go:242] waiting for startup goroutines ...
	I1216 03:06:59.436292  305678 start.go:247] waiting for cluster config update ...
	I1216 03:06:59.436331  305678 start.go:256] writing updated cluster config ...
	I1216 03:06:59.436568  305678 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:59.440748  305678 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:59.444513  305678 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7kfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.448521  305678 pod_ready.go:94] pod "coredns-66bc5c9577-w7kfz" is "Ready"
	I1216 03:06:59.448540  305678 pod_ready.go:86] duration metric: took 4.002957ms for pod "coredns-66bc5c9577-w7kfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.450549  305678 pod_ready.go:83] waiting for pod "etcd-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.454198  305678 pod_ready.go:94] pod "etcd-auto-646016" is "Ready"
	I1216 03:06:59.454220  305678 pod_ready.go:86] duration metric: took 3.644632ms for pod "etcd-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.456274  305678 pod_ready.go:83] waiting for pod "kube-apiserver-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.459897  305678 pod_ready.go:94] pod "kube-apiserver-auto-646016" is "Ready"
	I1216 03:06:59.459920  305678 pod_ready.go:86] duration metric: took 3.627374ms for pod "kube-apiserver-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.462673  305678 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.845697  305678 pod_ready.go:94] pod "kube-controller-manager-auto-646016" is "Ready"
	I1216 03:06:59.845724  305678 pod_ready.go:86] duration metric: took 383.032974ms for pod "kube-controller-manager-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.046236  305678 pod_ready.go:83] waiting for pod "kube-proxy-hwssz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.445412  305678 pod_ready.go:94] pod "kube-proxy-hwssz" is "Ready"
	I1216 03:07:00.445441  305678 pod_ready.go:86] duration metric: took 399.181443ms for pod "kube-proxy-hwssz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.645598  305678 pod_ready.go:83] waiting for pod "kube-scheduler-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:01.046668  305678 pod_ready.go:94] pod "kube-scheduler-auto-646016" is "Ready"
	I1216 03:07:01.046698  305678 pod_ready.go:86] duration metric: took 401.069816ms for pod "kube-scheduler-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:01.046714  305678 pod_ready.go:40] duration metric: took 1.605935876s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:07:01.100168  305678 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:07:01.102443  305678 out.go:179] * Done! kubectl is now configured to use "auto-646016" cluster and "default" namespace by default
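The block above shows the readiness gates minikube walks through once the node comes up: wait for the kubelet-reported Ready condition, poll the apiserver /healthz endpoint until it returns 200, wait for the kube-system pods (retrying while kube-dns is still missing), and finally wait for the labelled control-plane pods to report Ready. Below is a minimal, stdlib-only Go sketch of the healthz polling pattern; the URL, the 500ms interval, and the skipped certificate verification are illustrative assumptions, since the real check authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skipping verification keeps the sketch self-contained; a real
		// client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver reports healthy")
}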
	I1216 03:07:00.749233  311649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:07:00.753983  311649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:07:00.753999  311649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:07:00.769555  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:07:00.983333  311649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:07:00.983402  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:00.983420  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-646016 minikube.k8s.io/updated_at=2025_12_16T03_07_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=kindnet-646016 minikube.k8s.io/primary=true
	I1216 03:07:00.994666  311649 ops.go:34] apiserver oom_adj: -16
	I1216 03:07:01.075445  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:01.575786  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:02.076390  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:02.575611  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:03.075547  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:03.575755  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:04.076148  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:04.575753  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:05.075504  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:05.576052  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:06.076085  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:06.148434  311649 kubeadm.go:1114] duration metric: took 5.165094916s to wait for elevateKubeSystemPrivileges
	I1216 03:07:06.148465  311649 kubeadm.go:403] duration metric: took 16.069424018s to StartCluster
	I1216 03:07:06.148481  311649 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:06.148539  311649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:07:06.150375  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:06.150605  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:07:06.150611  311649 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:07:06.150712  311649 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:07:06.150851  311649 addons.go:70] Setting storage-provisioner=true in profile "kindnet-646016"
	I1216 03:07:06.150859  311649 config.go:182] Loaded profile config "kindnet-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:06.150876  311649 addons.go:239] Setting addon storage-provisioner=true in "kindnet-646016"
	I1216 03:07:06.150888  311649 addons.go:70] Setting default-storageclass=true in profile "kindnet-646016"
	I1216 03:07:06.150909  311649 host.go:66] Checking if "kindnet-646016" exists ...
	I1216 03:07:06.150910  311649 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-646016"
	I1216 03:07:06.151282  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.151441  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.152143  311649 out.go:179] * Verifying Kubernetes components...
	I1216 03:07:06.153565  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:07:06.176673  311649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:07:06.178156  311649 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:07:06.178180  311649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:07:06.178249  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:07:06.179311  311649 addons.go:239] Setting addon default-storageclass=true in "kindnet-646016"
	I1216 03:07:06.179358  311649 host.go:66] Checking if "kindnet-646016" exists ...
	I1216 03:07:06.179811  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.206511  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:07:06.210644  311649 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:07:06.210666  311649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:07:06.210723  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:07:06.239999  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:07:06.243953  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:07:06.320537  311649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:07:06.324892  311649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:07:06.358779  311649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:07:06.419573  311649 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1216 03:07:06.421111  311649 node_ready.go:35] waiting up to 15m0s for node "kindnet-646016" to be "Ready" ...
	I1216 03:07:06.616907  311649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:07:06.618138  311649 addons.go:530] duration metric: took 467.411759ms for enable addons: enabled=[storage-provisioner default-storageclass]
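The repeated "kubectl get sa default" calls earlier in this log are the same retry-until-success idiom, polled roughly every 500ms until the default service account exists before kube-system privileges are elevated. A hedged Go sketch of that loop follows; the command, namespace, and two-minute timeout are assumptions for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit code 0 only once the service account exists.
		if err := exec.Command("kubectl", "get", "sa", "default", "-n", "default").Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}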
	
	
	==> CRI-O <==
	Dec 16 03:06:41 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:41.748007817Z" level=info msg="Started container" PID=1748 containerID=f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper id=37913eb5-bfa5-44de-afab-44a1a60d2949 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e983c3424c0f8f2c018d765fad8f3bf6cae711961033abbdc4fb7d1dca9884f6
	Dec 16 03:06:42 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:42.368075439Z" level=info msg="Removing container: cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45" id=0279ddc5-82bf-4173-8d9f-13a4b5a9325d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:06:42 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:42.377251268Z" level=info msg="Removed container cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=0279ddc5-82bf-4173-8d9f-13a4b5a9325d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.379119045Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=dcc9a025-6092-4cf1-b87e-ccc3c6bff1f5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.38009357Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=983ff651-cf81-4e14-a826-d9574b872308 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.38121106Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1891ec5b-2913-42e3-ad86-23e0fc6f17aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.381347465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.38577982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.385951188Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3af0fc3a33a2d3a46d59856fd43a675dd2f3723dff4f9ceccf1e4735543bf537/merged/etc/passwd: no such file or directory"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.385975612Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3af0fc3a33a2d3a46d59856fd43a675dd2f3723dff4f9ceccf1e4735543bf537/merged/etc/group: no such file or directory"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.386192987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.427153131Z" level=info msg="Created container 6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9: kube-system/storage-provisioner/storage-provisioner" id=1891ec5b-2913-42e3-ad86-23e0fc6f17aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.42797118Z" level=info msg="Starting container: 6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9" id=3aa3c8e2-2bf9-4ca8-96a9-83d6a34ef0fc name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.430414722Z" level=info msg="Started container" PID=1762 containerID=6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9 description=kube-system/storage-provisioner/storage-provisioner id=3aa3c8e2-2bf9-4ca8-96a9-83d6a34ef0fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=414e8ac3bed89aa5672bd11b143e7e2f6de3690caaa4bba4977843ed83ae2ca3
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.236331982Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3897ca6f-2c79-4002-963c-370405a5ac9b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.23750281Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c096183f-cb79-4a30-88fe-db24b4424792 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.238526278Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=b95cc50e-8c27-434b-a0c9-172667694e5b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.238676855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.243928988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.244359624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.277191596Z" level=info msg="Created container 9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=b95cc50e-8c27-434b-a0c9-172667694e5b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.27791959Z" level=info msg="Starting container: 9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916" id=eb203528-8ff4-491f-b014-28987dd48c87 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.2797455Z" level=info msg="Started container" PID=1800 containerID=9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper id=eb203528-8ff4-491f-b014-28987dd48c87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e983c3424c0f8f2c018d765fad8f3bf6cae711961033abbdc4fb7d1dca9884f6
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.428498364Z" level=info msg="Removing container: f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3" id=53180ffb-c8cf-4ad2-8814-1a9a5395e1d1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.438839257Z" level=info msg="Removed container f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=53180ffb-c8cf-4ad2-8814-1a9a5395e1d1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	9e0a9aaa36217       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   e983c3424c0f8       dashboard-metrics-scraper-6ffb444bf9-rqq6z             kubernetes-dashboard
	6eebae3db0a17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   414e8ac3bed89       storage-provisioner                                    kube-system
	7b84397dc8626       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   be3a3a11415cd       kubernetes-dashboard-855c9754f9-s5jhg                  kubernetes-dashboard
	b8d4c9ffcedfa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   7d8e1a1ad8ab1       coredns-66bc5c9577-xndlx                               kube-system
	9ef7e22f5cd62       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   8c93e4944a79d       busybox                                                default
	e2bb736213932       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   414e8ac3bed89       storage-provisioner                                    kube-system
	670184db3f804       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   f1b56408d0141       kindnet-w5gmn                                          kube-system
	07671a687288f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   f2581db915191       kube-proxy-2g6tn                                       kube-system
	7f87e3c1123f6       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           58 seconds ago      Running             kube-scheduler              0                   8be1ff7a1fd80       kube-scheduler-default-k8s-diff-port-079165            kube-system
	8c44d80f00165       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           58 seconds ago      Running             kube-apiserver              0                   2f35eae814b79       kube-apiserver-default-k8s-diff-port-079165            kube-system
	f08cb369199f4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           58 seconds ago      Running             kube-controller-manager     0                   72428c8695a8d       kube-controller-manager-default-k8s-diff-port-079165   kube-system
	9eb509b8cbb5d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   00fda6074bfc2       etcd-default-k8s-diff-port-079165                      kube-system
	
	
	==> coredns [b8d4c9ffcedfa2733716688755d46ab1cc30a1030b23f067da3967664b23c7d2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55825 - 63081 "HINFO IN 6203087275699617728.7508908622677758774. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015988165s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-079165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-079165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=default-k8s-diff-port-079165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_05_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:05:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-079165
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:07:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-079165
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                67cf8032-f343-4067-841b-e5dc637b7a61
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-xndlx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-079165                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-w5gmn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-079165             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-079165    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-2g6tn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-079165             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rqq6z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s5jhg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-079165 event: Registered Node default-k8s-diff-port-079165 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-079165 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-079165 event: Registered Node default-k8s-diff-port-079165 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [9eb509b8cbb5d7a44028103cf5f6f28096129184fb10f77e1543e3556c3e9c5f] <==
	{"level":"warn","ts":"2025-12-16T03:06:19.715248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.439583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-12-16T03:06:19.715255Z","caller":"traceutil/trace.go:172","msg":"trace[768632770] transaction","detail":"{read_only:false; response_revision:556; number_of_response:1; }","duration":"363.455218ms","start":"2025-12-16T03:06:19.351781Z","end":"2025-12-16T03:06:19.715237Z","steps":["trace[768632770] 'process raft request'  (duration: 363.197001ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:19.715285Z","caller":"traceutil/trace.go:172","msg":"trace[166994788] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:558; }","duration":"165.485238ms","start":"2025-12-16T03:06:19.549792Z","end":"2025-12-16T03:06:19.715277Z","steps":["trace[166994788] 'agreement among raft nodes before linearized reading'  (duration: 165.378568ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:19.715299Z","caller":"traceutil/trace.go:172","msg":"trace[1783366334] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"363.030687ms","start":"2025-12-16T03:06:19.352254Z","end":"2025-12-16T03:06:19.715285Z","steps":["trace[1783366334] 'process raft request'  (duration: 362.843883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:19.715612Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:19.352241Z","time spent":"363.311683ms","remote":"127.0.0.1:56048","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:551 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:4688 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" > >"}
	{"level":"warn","ts":"2025-12-16T03:06:19.715353Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:19.351765Z","time spent":"363.528729ms","remote":"127.0.0.1:56048","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4918,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:548 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4847 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"info","ts":"2025-12-16T03:06:19.715394Z","caller":"traceutil/trace.go:172","msg":"trace[2054746561] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"363.027775ms","start":"2025-12-16T03:06:19.352354Z","end":"2025-12-16T03:06:19.715382Z","steps":["trace[2054746561] 'process raft request'  (duration: 362.775255ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:19.715833Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:19.352334Z","time spent":"363.437074ms","remote":"127.0.0.1:55514","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4220,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z\" mod_revision:544 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z\" value_size:4134 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z\" > >"}
	{"level":"warn","ts":"2025-12-16T03:06:19.715434Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.63598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-12-16T03:06:19.716020Z","caller":"traceutil/trace.go:172","msg":"trace[86406049] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:558; }","duration":"166.216163ms","start":"2025-12-16T03:06:19.549792Z","end":"2025-12-16T03:06:19.716008Z","steps":["trace[86406049] 'agreement among raft nodes before linearized reading'  (duration: 165.581475ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:19.966963Z","caller":"traceutil/trace.go:172","msg":"trace[224739754] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"242.173742ms","start":"2025-12-16T03:06:19.724768Z","end":"2025-12-16T03:06:19.966942Z","steps":["trace[224739754] 'process raft request'  (duration: 154.540373ms)","trace[224739754] 'compare'  (duration: 87.50409ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T03:06:25.551884Z","caller":"traceutil/trace.go:172","msg":"trace[1742646181] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"173.20599ms","start":"2025-12-16T03:06:25.378659Z","end":"2025-12-16T03:06:25.551865Z","steps":["trace[1742646181] 'process raft request'  (duration: 173.074465ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.132923Z","caller":"traceutil/trace.go:172","msg":"trace[1011072438] linearizableReadLoop","detail":"{readStateIndex:607; appliedIndex:607; }","duration":"110.425814ms","start":"2025-12-16T03:06:26.022472Z","end":"2025-12-16T03:06:26.132897Z","steps":["trace[1011072438] 'read index received'  (duration: 110.417141ms)","trace[1011072438] 'applied index is now lower than readState.Index'  (duration: 7.103µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.133166Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.676048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-xndlx\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-12-16T03:06:26.133264Z","caller":"traceutil/trace.go:172","msg":"trace[1510992810] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-xndlx; range_end:; response_count:1; response_revision:578; }","duration":"110.791232ms","start":"2025-12-16T03:06:26.022461Z","end":"2025-12-16T03:06:26.133253Z","steps":["trace[1510992810] 'agreement among raft nodes before linearized reading'  (duration: 110.563494ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.133542Z","caller":"traceutil/trace.go:172","msg":"trace[1831736857] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"284.585754ms","start":"2025-12-16T03:06:25.848940Z","end":"2025-12-16T03:06:26.133526Z","steps":["trace[1831736857] 'process raft request'  (duration: 284.062898ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.296414Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.171811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-079165\" limit:1 ","response":"range_response_count:1 size:5758"}
	{"level":"info","ts":"2025-12-16T03:06:26.296532Z","caller":"traceutil/trace.go:172","msg":"trace[1045722015] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-079165; range_end:; response_count:1; response_revision:579; }","duration":"158.304928ms","start":"2025-12-16T03:06:26.138212Z","end":"2025-12-16T03:06:26.296517Z","steps":["trace[1045722015] 'agreement among raft nodes before linearized reading'  (duration: 97.197347ms)","trace[1045722015] 'range keys from in-memory index tree'  (duration: 60.863931ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.299036Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.882048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-079165\" limit:1 ","response":"range_response_count:1 size:7994"}
	{"level":"info","ts":"2025-12-16T03:06:26.299102Z","caller":"traceutil/trace.go:172","msg":"trace[983807624] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-079165; range_end:; response_count:1; response_revision:579; }","duration":"158.953056ms","start":"2025-12-16T03:06:26.140131Z","end":"2025-12-16T03:06:26.299084Z","steps":["trace[983807624] 'agreement among raft nodes before linearized reading'  (duration: 158.749276ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.431891Z","caller":"traceutil/trace.go:172","msg":"trace[1990496510] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"124.352549ms","start":"2025-12-16T03:06:26.307518Z","end":"2025-12-16T03:06:26.431870Z","steps":["trace[1990496510] 'process raft request'  (duration: 116.100579ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.552969Z","caller":"traceutil/trace.go:172","msg":"trace[1455186466] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"107.533199ms","start":"2025-12-16T03:06:26.445417Z","end":"2025-12-16T03:06:26.552950Z","steps":["trace[1455186466] 'process raft request'  (duration: 107.086425ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:27.288985Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.522871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-079165\" limit:1 ","response":"range_response_count:1 size:6167"}
	{"level":"info","ts":"2025-12-16T03:06:27.289964Z","caller":"traceutil/trace.go:172","msg":"trace[1119916504] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-079165; range_end:; response_count:1; response_revision:584; }","duration":"122.505628ms","start":"2025-12-16T03:06:27.167429Z","end":"2025-12-16T03:06:27.289934Z","steps":["trace[1119916504] 'range keys from in-memory index tree'  (duration: 121.439331ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.349201Z","caller":"traceutil/trace.go:172","msg":"trace[882607615] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"107.397008ms","start":"2025-12-16T03:06:41.241767Z","end":"2025-12-16T03:06:41.349164Z","steps":["trace[882607615] 'process raft request'  (duration: 107.231563ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:07:10 up 49 min,  0 user,  load average: 4.05, 3.28, 2.12
	Linux default-k8s-diff-port-079165 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [670184db3f80433545341b0de34dd360a72b345c9118b0e24ab4a3867cf7efb9] <==
	I1216 03:06:15.889591       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:06:15.890018       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 03:06:15.890174       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:06:15.890190       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:06:15.890210       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:06:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:06:16.183285       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:06:16.183325       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:06:16.183337       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:06:16.183528       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:06:16.683643       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:06:16.683677       1 metrics.go:72] Registering metrics
	I1216 03:06:16.683738       1 controller.go:711] "Syncing nftables rules"
	I1216 03:06:26.181862       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:26.181932       1 main.go:301] handling current node
	I1216 03:06:36.181980       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:36.182018       1 main.go:301] handling current node
	I1216 03:06:46.181300       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:46.181363       1 main.go:301] handling current node
	I1216 03:06:56.181662       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:56.181699       1 main.go:301] handling current node
	I1216 03:07:06.181848       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:07:06.181884       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c44d80f00165272fd0d7f4fe0f600eca4f5945b7fff563472e76e5a5c4b2055] <==
	I1216 03:06:14.773366       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 03:06:14.773295       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 03:06:14.778623       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 03:06:14.783224       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:06:14.787808       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:06:14.788332       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:06:14.788409       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:06:14.788456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:06:14.788483       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:06:14.797534       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:06:14.811403       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:06:14.828883       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 03:06:14.828916       1 policy_source.go:240] refreshing policies
	I1216 03:06:14.835540       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:06:15.160665       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:06:15.199678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:06:15.232184       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:06:15.245459       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:06:15.260572       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:06:15.332036       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.206.126"}
	I1216 03:06:15.372078       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.22.42"}
	I1216 03:06:15.673393       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:06:18.518492       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:06:18.647079       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:06:18.836461       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f08cb369199f4afaffd3bcb8c4c8d87f52e397a6343b60c3723942d509b93e09] <==
	I1216 03:06:18.107197       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 03:06:18.107238       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 03:06:18.108462       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 03:06:18.108544       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 03:06:18.112715       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 03:06:18.112745       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1216 03:06:18.112846       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 03:06:18.112855       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 03:06:18.112864       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 03:06:18.112937       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:06:18.113139       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 03:06:18.113194       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 03:06:18.116695       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 03:06:18.119019       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:06:18.120075       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 03:06:18.129378       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:06:18.129451       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:06:18.129485       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:06:18.129497       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:06:18.129505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:06:18.132069       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:06:18.133305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:06:18.133392       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 03:06:18.136573       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 03:06:18.138859       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [07671a687288ffef99fb4f4809554ea0de160ede89fc4e8bb5a301fe2dd3c604] <==
	I1216 03:06:15.658999       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:06:15.734806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:06:15.835162       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:06:15.835229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 03:06:15.835372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:06:15.869523       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:06:15.869644       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:06:15.877605       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:06:15.878208       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:06:15.878261       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:15.880417       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:06:15.880450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:06:15.880577       1 config.go:200] "Starting service config controller"
	I1216 03:06:15.880596       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:06:15.880637       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:06:15.880650       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:06:15.881000       1 config.go:309] "Starting node config controller"
	I1216 03:06:15.881026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:06:15.881034       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:06:15.980977       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:06:15.981032       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:06:15.981073       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7f87e3c1123f6a7cdb3d996a27b53d6f22b23b6351b58d02cdb00eb78de8c301] <==
	I1216 03:06:13.297410       1 serving.go:386] Generated self-signed cert in-memory
	W1216 03:06:14.711286       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:06:14.711336       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:06:14.711349       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:06:14.711357       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:06:14.765205       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:06:14.765338       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:14.768536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:06:14.768625       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:06:14.769631       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:06:14.769717       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:06:14.869299       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 03:06:23 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:23.626376     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 16 03:06:24 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:24.302303     726 scope.go:117] "RemoveContainer" containerID="3a7b04394a668e79439508be34c2cea0acdbb7a883b2d55dbe79f3a2134ea093"
	Dec 16 03:06:25 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:25.309675     726 scope.go:117] "RemoveContainer" containerID="3a7b04394a668e79439508be34c2cea0acdbb7a883b2d55dbe79f3a2134ea093"
	Dec 16 03:06:25 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:25.310065     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:25 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:25.310266     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:26 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:26.313870     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:26 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:26.314082     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:29 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:29.288751     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:29 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:29.289078     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:30 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:30.239380     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s5jhg" podStartSLOduration=3.51342102 podStartE2EDuration="12.239355556s" podCreationTimestamp="2025-12-16 03:06:18 +0000 UTC" firstStartedPulling="2025-12-16 03:06:20.029486343 +0000 UTC m=+7.888367757" lastFinishedPulling="2025-12-16 03:06:28.755420865 +0000 UTC m=+16.614302293" observedRunningTime="2025-12-16 03:06:29.338119841 +0000 UTC m=+17.197001296" watchObservedRunningTime="2025-12-16 03:06:30.239355556 +0000 UTC m=+18.098236992"
	Dec 16 03:06:41 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:41.235647     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:42 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:42.365003     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:42 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:42.365244     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:06:42 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:42.365464     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:46 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:46.378671     726 scope.go:117] "RemoveContainer" containerID="e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa"
	Dec 16 03:06:49 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:49.289246     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:06:49 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:49.289499     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: I1216 03:07:04.235847     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: I1216 03:07:04.427176     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: I1216 03:07:04.427409     726 scope.go:117] "RemoveContainer" containerID="9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: E1216 03:07:04.427620     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: kubelet.service: Consumed 1.901s CPU time.
	
	
	==> kubernetes-dashboard [7b84397dc86262d0b356378c6b12b84c6636937a33524732bdbe7c871c61d178] <==
	2025/12/16 03:06:28 Starting overwatch
	2025/12/16 03:06:28 Using namespace: kubernetes-dashboard
	2025/12/16 03:06:28 Using in-cluster config to connect to apiserver
	2025/12/16 03:06:28 Using secret token for csrf signing
	2025/12/16 03:06:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:06:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:06:28 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 03:06:28 Generating JWE encryption key
	2025/12/16 03:06:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:06:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:06:28 Initializing JWE encryption key from synchronized object
	2025/12/16 03:06:28 Creating in-cluster Sidecar client
	2025/12/16 03:06:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:06:28 Serving insecurely on HTTP port: 9090
	2025/12/16 03:06:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9] <==
	I1216 03:06:46.445910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:06:46.453363       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:06:46.453403       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:06:46.455602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:49.911091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:54.171920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:57.770775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:00.825686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:03.848610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:03.852873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:07:03.853055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:07:03.853210       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079165_ab42cbcb-a8c1-40ad-a130-dc2cd0a0ded5!
	I1216 03:07:03.853203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41786d2c-b62a-4752-9d3d-2698b61108be", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-079165_ab42cbcb-a8c1-40ad-a130-dc2cd0a0ded5 became leader
	W1216 03:07:03.855161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:03.859726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:07:03.953394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079165_ab42cbcb-a8c1-40ad-a130-dc2cd0a0ded5!
	W1216 03:07:05.863134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:05.866637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:07.870465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:07.874419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:09.878080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:09.885150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa] <==
	I1216 03:06:15.624386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:06:45.629260       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165: exit status 2 (325.618559ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-079165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-079165
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-079165:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7",
	        "Created": "2025-12-16T03:05:00.382441166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:06:05.931740053Z",
	            "FinishedAt": "2025-12-16T03:06:04.95986793Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/hosts",
	        "LogPath": "/var/lib/docker/containers/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7/17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7-json.log",
	        "Name": "/default-k8s-diff-port-079165",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-079165:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-079165",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "17c3b6c10d0d675a22394b4bc96b02dc44a57da985c774f61fd5c2570768fad7",
	                "LowerDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2f628685f755b399332f3f35c6224bdcb22f9369f4ccff48e7e806876bb3db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-079165",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-079165/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-079165",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-079165",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-079165",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f98c8ad465e96a5bb94e95c9a2dab8d58d3b7fcd070abbb2ca5340ebba9f0dae",
	            "SandboxKey": "/var/run/docker/netns/f98c8ad465e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-079165": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5282d64d27b5a2514f04f90d1cd32aa132a110f71ffb368ba477ac385094fbb2",
	                    "EndpointID": "183aa249c553afc117a658f69c4fef51b4216f3c39683119791d3664a723a257",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "5e:b5:ae:a8:cf:b1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-079165",
	                        "17c3b6c10d0d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165: exit status 2 (326.38105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079165 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-079165 logs -n 25: (1.098841907s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-073001 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ stop    │ -p newest-cni-991316 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ image   │ no-preload-307185 image list --format=json                                                                                                                                                                                                           │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p no-preload-307185 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p old-k8s-version-073001                                                                                                                                                                                                                            │ old-k8s-version-073001       │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-991316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p disable-driver-mounts-899443                                                                                                                                                                                                                      │ disable-driver-mounts-899443 │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p no-preload-307185                                                                                                                                                                                                                                 │ no-preload-307185            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p auto-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:07 UTC │
	│ image   │ newest-cni-991316 image list --format=json                                                                                                                                                                                                           │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ pause   │ -p newest-cni-991316 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ delete  │ -p newest-cni-991316                                                                                                                                                                                                                                 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ delete  │ -p newest-cni-991316                                                                                                                                                                                                                                 │ newest-cni-991316            │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │ 16 Dec 25 03:06 UTC │
	│ start   │ -p kindnet-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-646016               │ jenkins │ v1.37.0 │ 16 Dec 25 03:06 UTC │                     │
	│ ssh     │ -p auto-646016 pgrep -a kubelet                                                                                                                                                                                                                      │ auto-646016                  │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-742794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ image   │ default-k8s-diff-port-079165 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ pause   │ -p default-k8s-diff-port-079165 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-079165 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ stop    │ -p embed-certs-742794 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-742794           │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:06:36.912506  311649 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:06:36.912641  311649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:36.912649  311649 out.go:374] Setting ErrFile to fd 2...
	I1216 03:06:36.912656  311649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:06:36.912959  311649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:06:36.914248  311649 out.go:368] Setting JSON to false
	I1216 03:06:36.915985  311649 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2949,"bootTime":1765851448,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:06:36.916062  311649 start.go:143] virtualization: kvm guest
	I1216 03:06:36.918316  311649 out.go:179] * [kindnet-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:06:36.921321  311649 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:06:36.921324  311649 notify.go:221] Checking for updates...
	I1216 03:06:36.926057  311649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:06:36.934596  311649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:36.937150  311649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:06:36.938685  311649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:06:36.940325  311649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:06:36.943016  311649 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943170  311649 config.go:182] Loaded profile config "default-k8s-diff-port-079165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943308  311649 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:36.943452  311649 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:06:36.974200  311649 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:06:36.974308  311649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:37.059528  311649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 03:06:37.045598159 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:37.059688  311649 docker.go:319] overlay module found
	I1216 03:06:37.062324  311649 out.go:179] * Using the docker driver based on user configuration
	I1216 03:06:37.064270  311649 start.go:309] selected driver: docker
	I1216 03:06:37.064290  311649 start.go:927] validating driver "docker" against <nil>
	I1216 03:06:37.064306  311649 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:06:37.065092  311649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:06:37.134587  311649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 03:06:37.120781191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:06:37.134868  311649 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:06:37.135202  311649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:37.139979  311649 out.go:179] * Using Docker driver with root privileges
	I1216 03:06:37.141298  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:06:37.141320  311649 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:06:37.141420  311649 start.go:353] cluster config:
	{Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:37.142847  311649 out.go:179] * Starting "kindnet-646016" primary control-plane node in "kindnet-646016" cluster
	I1216 03:06:37.144033  311649 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:06:37.145214  311649 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:06:37.146273  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:37.146323  311649 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:06:37.146332  311649 cache.go:65] Caching tarball of preloaded images
	I1216 03:06:37.146381  311649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:06:37.146438  311649 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:06:37.146451  311649 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:06:37.146582  311649 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json ...
	I1216 03:06:37.146609  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json: {Name:mka01fc2d87dd258e9e4215769fc0defca835ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:37.173960  311649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:06:37.174000  311649 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:06:37.174018  311649 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:06:37.174056  311649 start.go:360] acquireMachinesLock for kindnet-646016: {Name:mk5e982439fb31b21f2bf0f14b638469610e2ecb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:06:37.174175  311649 start.go:364] duration metric: took 97.838µs to acquireMachinesLock for "kindnet-646016"
	I1216 03:06:37.174206  311649 start.go:93] Provisioning new machine with config: &{Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:37.174307  311649 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:06:32.289938  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.297659  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:32.306317  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.310169  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.310225  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:32.358310  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:32.366800  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:32.374925  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.382691  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:32.390401  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.394611  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.394661  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:32.433920  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:32.442904  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:32.452551  305678 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.460567  305678 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:32.468254  305678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.472142  305678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.472194  305678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:32.512960  305678 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:32.521828  305678 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:32.531306  305678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:32.535264  305678 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:32.535327  305678 kubeadm.go:401] StartCluster: {Name:auto-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:32.535422  305678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:32.535487  305678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:32.570545  305678 cri.go:89] found id: ""
	I1216 03:06:32.570617  305678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:32.580361  305678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:32.590036  305678 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:32.590101  305678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:32.600310  305678 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:32.600328  305678 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:32.600380  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:32.611364  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:32.611434  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:32.621528  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:32.630592  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:32.630691  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:32.639135  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:32.647615  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:32.647672  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:32.655556  305678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:32.663704  305678 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:32.663751  305678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:06:32.671103  305678 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:32.732749  305678 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:32.798205  305678 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:36.811045  301866 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.343509782s
	I1216 03:06:37.324341  301866 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.856445935s
	I1216 03:06:38.970006  301866 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502495893s
	I1216 03:06:38.987567  301866 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:06:38.999896  301866 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:06:39.008632  301866 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:06:39.008951  301866 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-742794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:06:39.018346  301866 kubeadm.go:319] [bootstrap-token] Using token: jt3t6c.ftosdk62dr4hq8nx
	I1216 03:06:39.020229  301866 out.go:252]   - Configuring RBAC rules ...
	I1216 03:06:39.020406  301866 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:06:39.023717  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:06:39.030138  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:06:39.032812  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:06:39.035589  301866 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:06:39.040407  301866 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:06:39.376310  301866 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:06:39.798064  301866 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:06:40.387055  301866 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:06:40.388094  301866 kubeadm.go:319] 
	I1216 03:06:40.388196  301866 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:06:40.388227  301866 kubeadm.go:319] 
	I1216 03:06:40.388343  301866 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:06:40.388356  301866 kubeadm.go:319] 
	I1216 03:06:40.388385  301866 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:06:40.388525  301866 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:06:40.388619  301866 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:06:40.388630  301866 kubeadm.go:319] 
	I1216 03:06:40.388735  301866 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:06:40.388751  301866 kubeadm.go:319] 
	I1216 03:06:40.388846  301866 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:06:40.388859  301866 kubeadm.go:319] 
	I1216 03:06:40.388922  301866 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:06:40.388986  301866 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:06:40.389039  301866 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:06:40.389047  301866 kubeadm.go:319] 
	I1216 03:06:40.389159  301866 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:06:40.389224  301866 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:06:40.389230  301866 kubeadm.go:319] 
	I1216 03:06:40.389294  301866 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jt3t6c.ftosdk62dr4hq8nx \
	I1216 03:06:40.389377  301866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:06:40.389395  301866 kubeadm.go:319] 	--control-plane 
	I1216 03:06:40.389400  301866 kubeadm.go:319] 
	I1216 03:06:40.389478  301866 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:06:40.389487  301866 kubeadm.go:319] 
	I1216 03:06:40.389595  301866 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jt3t6c.ftosdk62dr4hq8nx \
	I1216 03:06:40.389778  301866 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:06:40.392758  301866 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:40.392974  301866 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:40.393004  301866 cni.go:84] Creating CNI manager for ""
	I1216 03:06:40.393011  301866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:40.488426  301866 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1216 03:06:37.030102  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:39.526744  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:37.176299  311649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:06:37.176572  311649 start.go:159] libmachine.API.Create for "kindnet-646016" (driver="docker")
	I1216 03:06:37.176609  311649 client.go:173] LocalClient.Create starting
	I1216 03:06:37.176683  311649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:06:37.176734  311649 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:37.176758  311649 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:37.176868  311649 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:06:37.176934  311649 main.go:143] libmachine: Decoding PEM data...
	I1216 03:06:37.176955  311649 main.go:143] libmachine: Parsing certificate...
	I1216 03:06:37.177346  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:06:37.198035  311649 cli_runner.go:211] docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:06:37.198117  311649 network_create.go:284] running [docker network inspect kindnet-646016] to gather additional debugging logs...
	I1216 03:06:37.198140  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016
	W1216 03:06:37.217351  311649 cli_runner.go:211] docker network inspect kindnet-646016 returned with exit code 1
	I1216 03:06:37.217385  311649 network_create.go:287] error running [docker network inspect kindnet-646016]: docker network inspect kindnet-646016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-646016 not found
	I1216 03:06:37.217404  311649 network_create.go:289] output of [docker network inspect kindnet-646016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-646016 not found
	
	** /stderr **
	I1216 03:06:37.217553  311649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:37.239137  311649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:06:37.240088  311649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:06:37.241036  311649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:06:37.242047  311649 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7bbc0}
	I1216 03:06:37.242076  311649 network_create.go:124] attempt to create docker network kindnet-646016 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 03:06:37.242129  311649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-646016 kindnet-646016
	I1216 03:06:37.303813  311649 network_create.go:108] docker network kindnet-646016 192.168.76.0/24 created
	I1216 03:06:37.303878  311649 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-646016" container
	I1216 03:06:37.303960  311649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:06:37.326233  311649 cli_runner.go:164] Run: docker volume create kindnet-646016 --label name.minikube.sigs.k8s.io=kindnet-646016 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:06:37.345781  311649 oci.go:103] Successfully created a docker volume kindnet-646016
	I1216 03:06:37.345884  311649 cli_runner.go:164] Run: docker run --rm --name kindnet-646016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646016 --entrypoint /usr/bin/test -v kindnet-646016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:06:37.826587  311649 oci.go:107] Successfully prepared a docker volume kindnet-646016
	I1216 03:06:37.826662  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:37.826680  311649 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:06:37.826753  311649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 03:06:42.492370  305678 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:06:42.492457  305678 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:06:42.492585  305678 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:06:42.492655  305678 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:06:42.492702  305678 kubeadm.go:319] OS: Linux
	I1216 03:06:42.492792  305678 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:06:42.492885  305678 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:06:42.492953  305678 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:06:42.493065  305678 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:06:42.493139  305678 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:06:42.493206  305678 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:06:42.493274  305678 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:06:42.493336  305678 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:06:42.493440  305678 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:06:42.493521  305678 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:06:42.493648  305678 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:06:42.493769  305678 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:06:42.494971  305678 out.go:252]   - Generating certificates and keys ...
	I1216 03:06:42.495073  305678 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:06:42.495136  305678 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:06:42.495239  305678 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:06:42.495320  305678 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:06:42.495390  305678 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:06:42.495471  305678 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:06:42.495555  305678 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:06:42.495710  305678 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-646016 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:06:42.495789  305678 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:06:42.495956  305678 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-646016 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 03:06:42.496049  305678 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:06:42.496141  305678 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:06:42.496209  305678 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:06:42.496297  305678 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:06:42.496386  305678 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:06:42.496480  305678 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:06:42.496551  305678 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:06:42.496644  305678 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:06:42.496722  305678 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:06:42.496861  305678 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:06:42.496960  305678 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:06:42.498424  305678 out.go:252]   - Booting up control plane ...
	I1216 03:06:42.498537  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:06:42.498665  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:06:42.498728  305678 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:06:42.498847  305678 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:06:42.498988  305678 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:06:42.499152  305678 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:06:42.499290  305678 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:06:42.499345  305678 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:06:42.499657  305678 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:06:42.499788  305678 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:06:42.499885  305678 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.112324ms
	I1216 03:06:42.500041  305678 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:06:42.500173  305678 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1216 03:06:42.500323  305678 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:06:42.500442  305678 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:06:42.500546  305678 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.175318386s
	I1216 03:06:42.500649  305678 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.4004376s
	I1216 03:06:42.500732  305678 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501249222s
	I1216 03:06:42.500884  305678 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:06:42.501003  305678 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:06:42.501081  305678 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:06:42.501327  305678 kubeadm.go:319] [mark-control-plane] Marking the node auto-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:06:42.501376  305678 kubeadm.go:319] [bootstrap-token] Using token: lvkpe0.dg8z2fbad7xa25ob
	I1216 03:06:42.502851  305678 out.go:252]   - Configuring RBAC rules ...
	I1216 03:06:42.502987  305678 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:06:42.503101  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:06:42.503288  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:06:42.503482  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:06:42.503640  305678 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:06:42.503758  305678 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:06:42.503965  305678 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:06:42.504037  305678 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:06:42.504108  305678 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:06:42.504119  305678 kubeadm.go:319] 
	I1216 03:06:42.504203  305678 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:06:42.504215  305678 kubeadm.go:319] 
	I1216 03:06:42.504329  305678 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:06:42.504345  305678 kubeadm.go:319] 
	I1216 03:06:42.504395  305678 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:06:42.504479  305678 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:06:42.504568  305678 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:06:42.504579  305678 kubeadm.go:319] 
	I1216 03:06:42.504668  305678 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:06:42.504683  305678 kubeadm.go:319] 
	I1216 03:06:42.504765  305678 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:06:42.504775  305678 kubeadm.go:319] 
	I1216 03:06:42.504864  305678 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:06:42.504998  305678 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:06:42.505082  305678 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:06:42.505091  305678 kubeadm.go:319] 
	I1216 03:06:42.505215  305678 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:06:42.505315  305678 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:06:42.505323  305678 kubeadm.go:319] 
	I1216 03:06:42.505423  305678 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lvkpe0.dg8z2fbad7xa25ob \
	I1216 03:06:42.505558  305678 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:06:42.505584  305678 kubeadm.go:319] 	--control-plane 
	I1216 03:06:42.505592  305678 kubeadm.go:319] 
	I1216 03:06:42.505680  305678 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:06:42.505686  305678 kubeadm.go:319] 
	I1216 03:06:42.505749  305678 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lvkpe0.dg8z2fbad7xa25ob \
	I1216 03:06:42.505864  305678 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:06:42.505877  305678 cni.go:84] Creating CNI manager for ""
	I1216 03:06:42.505884  305678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 03:06:42.507282  305678 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 03:06:40.556500  301866 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:06:40.561584  301866 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:06:40.561613  301866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:06:40.577774  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:06:41.613918  301866 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.036089237s)
	I1216 03:06:41.613972  301866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:06:41.614150  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:41.614173  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-742794 minikube.k8s.io/updated_at=2025_12_16T03_06_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=embed-certs-742794 minikube.k8s.io/primary=true
	I1216 03:06:41.626342  301866 ops.go:34] apiserver oom_adj: -16
	I1216 03:06:41.845142  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.345943  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.845105  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.345902  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.845135  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.345102  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.846051  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.345989  301866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.416661  301866 kubeadm.go:1114] duration metric: took 3.802575761s to wait for elevateKubeSystemPrivileges
	I1216 03:06:45.416708  301866 kubeadm.go:403] duration metric: took 16.875245445s to StartCluster
	I1216 03:06:45.416731  301866 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:45.416953  301866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:45.418953  301866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:45.419173  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:06:45.419182  301866 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:45.419261  301866 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:45.419359  301866 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-742794"
	I1216 03:06:45.419381  301866 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-742794"
	I1216 03:06:45.419396  301866 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:45.419414  301866 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:06:45.419459  301866 addons.go:70] Setting default-storageclass=true in profile "embed-certs-742794"
	I1216 03:06:45.419480  301866 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-742794"
	I1216 03:06:45.419894  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.420161  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.424569  301866 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:45.425946  301866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:45.449105  301866 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1216 03:06:42.026493  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:44.525591  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:45.450234  301866 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:45.450254  301866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:45.450315  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:45.450918  301866 addons.go:239] Setting addon default-storageclass=true in "embed-certs-742794"
	I1216 03:06:45.451884  301866 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:06:45.452391  301866 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:06:45.474794  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:45.477242  301866 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:45.477258  301866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:45.477348  301866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:06:45.507412  301866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:06:45.532004  301866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:06:45.601352  301866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:45.618429  301866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:45.642176  301866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:45.751205  301866 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1216 03:06:45.926484  301866 node_ready.go:35] waiting up to 6m0s for node "embed-certs-742794" to be "Ready" ...
	I1216 03:06:45.931875  301866 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:06:42.187278  311649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.360463421s)
	I1216 03:06:42.187316  311649 kic.go:203] duration metric: took 4.360631679s to extract preloaded images to volume ...
	W1216 03:06:42.187436  311649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:06:42.187482  311649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:06:42.187655  311649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:06:42.264475  311649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-646016 --name kindnet-646016 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-646016 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-646016 --network kindnet-646016 --ip 192.168.76.2 --volume kindnet-646016:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:06:42.589318  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Running}}
	I1216 03:06:42.613344  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.636793  311649 cli_runner.go:164] Run: docker exec kindnet-646016 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:06:42.692951  311649 oci.go:144] the created container "kindnet-646016" has a running status.
	I1216 03:06:42.693027  311649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa...
	I1216 03:06:42.723209  311649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:06:42.759298  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.788064  311649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:06:42.788107  311649 kic_runner.go:114] Args: [docker exec --privileged kindnet-646016 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:06:42.841532  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:06:42.870136  311649 machine.go:94] provisionDockerMachine start ...
	I1216 03:06:42.870241  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:42.900132  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:42.900484  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:42.900507  311649 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:06:42.901354  311649 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55522->127.0.0.1:33109: read: connection reset by peer
	I1216 03:06:46.051362  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646016
	
	I1216 03:06:46.051391  311649 ubuntu.go:182] provisioning hostname "kindnet-646016"
	I1216 03:06:46.051471  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.071710  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.072035  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.072054  311649 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-646016 && echo "kindnet-646016" | sudo tee /etc/hostname
	I1216 03:06:46.229313  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-646016
	
	I1216 03:06:46.229390  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.250802  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.251099  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.251120  311649 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-646016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-646016/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-646016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:06:46.394197  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:06:46.394227  311649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:06:46.394258  311649 ubuntu.go:190] setting up certificates
	I1216 03:06:46.394271  311649 provision.go:84] configureAuth start
	I1216 03:06:46.394331  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:46.416666  311649 provision.go:143] copyHostCerts
	I1216 03:06:46.416740  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:06:46.416755  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:06:46.416885  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:06:46.417042  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:06:46.417058  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:06:46.417120  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:06:46.417250  311649 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:06:46.417265  311649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:06:46.417314  311649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:06:46.417441  311649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.kindnet-646016 san=[127.0.0.1 192.168.76.2 kindnet-646016 localhost minikube]
	I1216 03:06:46.669146  311649 provision.go:177] copyRemoteCerts
	I1216 03:06:46.669199  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:06:46.669229  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.689779  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:46.791881  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:06:46.813593  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:06:46.832367  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 03:06:46.850692  311649 provision.go:87] duration metric: took 456.406984ms to configureAuth
	I1216 03:06:46.850726  311649 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:06:46.850934  311649 config.go:182] Loaded profile config "kindnet-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:46.851035  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:46.871285  311649 main.go:143] libmachine: Using SSH client type: native
	I1216 03:06:46.871493  311649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1216 03:06:46.871507  311649 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:06:42.508558  305678 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:06:42.513406  305678 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:06:42.513425  305678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:06:42.529253  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:06:42.791486  305678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:06:42.791569  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:42.791628  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-646016 minikube.k8s.io/updated_at=2025_12_16T03_06_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=auto-646016 minikube.k8s.io/primary=true
	I1216 03:06:42.804265  305678 ops.go:34] apiserver oom_adj: -16
	I1216 03:06:42.902143  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.402756  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:43.903006  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.402268  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:44.902852  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.403072  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:45.902749  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.403233  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.902362  305678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:06:46.975382  305678 kubeadm.go:1114] duration metric: took 4.183882801s to wait for elevateKubeSystemPrivileges
	I1216 03:06:46.975415  305678 kubeadm.go:403] duration metric: took 14.440090912s to StartCluster
	I1216 03:06:46.975437  305678 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:46.975508  305678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:06:46.977140  305678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:46.977403  305678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:06:46.977404  305678 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:06:46.977486  305678 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:06:46.977579  305678 addons.go:70] Setting storage-provisioner=true in profile "auto-646016"
	I1216 03:06:46.977599  305678 addons.go:70] Setting default-storageclass=true in profile "auto-646016"
	I1216 03:06:46.977606  305678 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:06:46.977650  305678 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-646016"
	I1216 03:06:46.977607  305678 addons.go:239] Setting addon storage-provisioner=true in "auto-646016"
	I1216 03:06:46.977743  305678 host.go:66] Checking if "auto-646016" exists ...
	I1216 03:06:46.978050  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:46.978306  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:46.982308  305678 out.go:179] * Verifying Kubernetes components...
	I1216 03:06:46.983620  305678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:47.002437  305678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:06:47.002605  305678 addons.go:239] Setting addon default-storageclass=true in "auto-646016"
	I1216 03:06:47.002668  305678 host.go:66] Checking if "auto-646016" exists ...
	I1216 03:06:47.003259  305678 cli_runner.go:164] Run: docker container inspect auto-646016 --format={{.State.Status}}
	I1216 03:06:47.003564  305678 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:47.003579  305678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:06:47.003634  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:47.035685  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:47.038358  305678 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:47.038384  305678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:06:47.038454  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-646016
	I1216 03:06:47.063766  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/auto-646016/id_rsa Username:docker}
	I1216 03:06:47.081171  305678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:06:47.136199  305678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:47.154654  305678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:06:47.183681  305678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:06:47.284544  305678 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 03:06:47.286230  305678 node_ready.go:35] waiting up to 15m0s for node "auto-646016" to be "Ready" ...
	I1216 03:06:47.496268  305678 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
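The replace command a few lines above patches the CoreDNS ConfigMap so that host.minikube.internal resolves to 192.168.94.1 inside the auto-646016 cluster. As a hedged sketch (not part of the test run), the injected hosts stanza could be confirmed afterwards like this; the kubectl context name follows minikube's profile-named contexts, and the grep pattern is illustrative:

    # Illustrative: print the Corefile and the injected hosts { ... } block.
    kubectl --context auto-646016 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'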
	I1216 03:06:47.193617  311649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:06:47.194051  311649 machine.go:97] duration metric: took 4.323568124s to provisionDockerMachine
	I1216 03:06:47.194092  311649 client.go:176] duration metric: took 10.017462228s to LocalClient.Create
	I1216 03:06:47.194125  311649 start.go:167] duration metric: took 10.017552786s to libmachine.API.Create "kindnet-646016"
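The SSH command logged above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarts CRI-O inside the kindnet-646016 node. A minimal sketch of inspecting that drop-in by hand over the same forwarded SSH port; the port (33109), key path and user are taken from the log, and the check itself is illustrative rather than part of the test:

    # Illustrative manual check of the sysconfig drop-in written above.
    ssh -o StrictHostKeyChecking=no -p 33109 \
      -i /home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa \
      docker@127.0.0.1 \
      'cat /etc/sysconfig/crio.minikube && systemctl is-active crio'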
	I1216 03:06:47.194137  311649 start.go:293] postStartSetup for "kindnet-646016" (driver="docker")
	I1216 03:06:47.194157  311649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:06:47.194247  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:06:47.194306  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.220949  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.335239  311649 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:06:47.339735  311649 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:06:47.339764  311649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:06:47.339779  311649 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:06:47.339871  311649 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:06:47.339980  311649 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:06:47.340094  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:06:47.348131  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:47.372022  311649 start.go:296] duration metric: took 177.869291ms for postStartSetup
	I1216 03:06:47.372443  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:47.397221  311649 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/config.json ...
	I1216 03:06:47.397550  311649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:06:47.397606  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.415859  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.518022  311649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:06:47.523427  311649 start.go:128] duration metric: took 10.349106383s to createHost
	I1216 03:06:47.523456  311649 start.go:83] releasing machines lock for "kindnet-646016", held for 10.349266687s
	I1216 03:06:47.523530  311649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-646016
	I1216 03:06:47.546521  311649 ssh_runner.go:195] Run: cat /version.json
	I1216 03:06:47.546578  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.546599  311649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:06:47.546669  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:06:47.570313  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.570302  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:06:47.721354  311649 ssh_runner.go:195] Run: systemctl --version
	I1216 03:06:47.728115  311649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:06:47.764096  311649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:06:47.769332  311649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:06:47.769416  311649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:06:47.800234  311649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:06:47.800264  311649 start.go:496] detecting cgroup driver to use...
	I1216 03:06:47.800299  311649 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:06:47.800346  311649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:06:47.816262  311649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:06:47.828857  311649 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:06:47.828917  311649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:06:47.846000  311649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:06:47.864948  311649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:06:47.954521  311649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:06:48.052042  311649 docker.go:234] disabling docker service ...
	I1216 03:06:48.052109  311649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:06:48.070097  311649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:06:48.084175  311649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:06:48.172571  311649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:06:48.260483  311649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:06:48.273064  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:06:48.287395  311649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:06:48.287445  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.299225  311649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:06:48.299303  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.308963  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.318151  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.326922  311649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:06:48.336676  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.346533  311649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.363190  311649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:06:48.372458  311649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:06:48.380763  311649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:06:48.388403  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:48.471564  311649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:06:48.611303  311649 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:06:48.611368  311649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:06:48.615409  311649 start.go:564] Will wait 60s for crictl version
	I1216 03:06:48.615453  311649 ssh_runner.go:195] Run: which crictl
	I1216 03:06:48.619372  311649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:06:48.644746  311649 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:06:48.644839  311649 ssh_runner.go:195] Run: crio --version
	I1216 03:06:48.673737  311649 ssh_runner.go:195] Run: crio --version
	I1216 03:06:48.702915  311649 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
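The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "systemd", conmon_cgroup "pod", the unprivileged-port sysctl) before CRI-O is restarted. A small sketch, assuming the same drop-in path, of how the resulting runtime settings could be double-checked on the node; these commands are illustrative and were not run by the test:

    # Illustrative: confirm the values the sed edits above placed in CRI-O's drop-in.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # crictl uses the endpoint written to /etc/crictl.yaml earlier in the log.
    sudo crictl version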
	I1216 03:06:45.932861  301866 addons.go:530] duration metric: took 513.595889ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:06:46.256598  301866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-742794" context rescaled to 1 replicas
	W1216 03:06:47.929443  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:49.930144  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:46.525728  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:48.526032  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	W1216 03:06:50.526076  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:48.704147  311649 cli_runner.go:164] Run: docker network inspect kindnet-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:06:48.721392  311649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 03:06:48.725738  311649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:48.736033  311649 kubeadm.go:884] updating cluster {Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:06:48.736149  311649 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:06:48.736193  311649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:48.766912  311649 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:48.766931  311649 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:06:48.766981  311649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:06:48.793469  311649 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:06:48.793488  311649 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:06:48.793496  311649 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 03:06:48.793584  311649 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-646016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 03:06:48.793668  311649 ssh_runner.go:195] Run: crio config
	I1216 03:06:48.842069  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:06:48.842093  311649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:06:48.842113  311649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-646016 NodeName:kindnet-646016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:06:48.842278  311649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-646016"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:06:48.842350  311649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:06:48.851041  311649 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:06:48.851093  311649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:06:48.859976  311649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1216 03:06:48.873334  311649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:06:48.888764  311649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
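The kubeadm configuration assembled above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document) is copied to /var/tmp/minikube/kubeadm.yaml.new here and consumed by kubeadm init later in the log. As a hedged aside, recent kubeadm releases can sanity-check such a file before init runs; the binary path matches the log, but this validation step is illustrative and not part of minikube's flow:

    # Illustrative: validate the generated kubeadm config on the node before it is used.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new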
	I1216 03:06:48.901633  311649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:06:48.905305  311649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:06:48.915330  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:06:48.995098  311649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:06:49.027736  311649 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016 for IP: 192.168.76.2
	I1216 03:06:49.027754  311649 certs.go:195] generating shared ca certs ...
	I1216 03:06:49.027769  311649 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.027940  311649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:06:49.027991  311649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:06:49.027999  311649 certs.go:257] generating profile certs ...
	I1216 03:06:49.028050  311649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key
	I1216 03:06:49.028069  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt with IP's: []
	I1216 03:06:49.358443  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt ...
	I1216 03:06:49.358470  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.crt: {Name:mkd8b5e5f321efa7e9844310e79db14d2c69cdf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.358640  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key ...
	I1216 03:06:49.358651  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/client.key: {Name:mk0a2ea2343a207eb4a3896019c7d6511f76de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.358724  311649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97
	I1216 03:06:49.358739  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 03:06:49.547719  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 ...
	I1216 03:06:49.547746  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97: {Name:mk0ea02365886ae096b9e5de77c47711b9643fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.547929  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97 ...
	I1216 03:06:49.547944  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97: {Name:mke361247d57cd7cd2fc7dc06040d57afdcb0c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.548042  311649 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt.98913f97 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt
	I1216 03:06:49.548133  311649 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key.98913f97 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key
	I1216 03:06:49.548195  311649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key
	I1216 03:06:49.548210  311649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt with IP's: []
	I1216 03:06:49.631433  311649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt ...
	I1216 03:06:49.631466  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt: {Name:mkad790b016b1279eb196a1c4cb8b1281ceb030b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.631654  311649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key ...
	I1216 03:06:49.631672  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key: {Name:mk3cdaee6c7ccfd128b07eb42506350a5c451ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:06:49.631986  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:06:49.632029  311649 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:06:49.632038  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:06:49.632063  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:06:49.632086  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:06:49.632113  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:06:49.632153  311649 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:06:49.632685  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:06:49.654165  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:06:49.673281  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:06:49.691565  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:06:49.710072  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 03:06:49.727451  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:06:49.745210  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:06:49.762460  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kindnet-646016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:06:49.779547  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:06:49.799563  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:06:49.817781  311649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:06:49.836953  311649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:06:49.850440  311649 ssh_runner.go:195] Run: openssl version
	I1216 03:06:49.857619  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.865683  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:06:49.873871  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.877892  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.877973  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:06:49.913978  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:06:49.921860  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:06:49.930065  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.937793  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:06:49.945255  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.949134  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.949180  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:06:49.985209  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:06:49.993787  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:06:50.002243  311649 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.011462  311649 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:06:50.019433  311649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.023581  311649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.023639  311649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:06:50.058987  311649 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:06:50.067234  311649 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
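The openssl/ln pairs above are how the links in /etc/ssl/certs get their names: each <hash>.0 entry (b5213941.0, 51391683.0, 3ec20f2e.0 in this run) is the OpenSSL subject hash of the corresponding PEM. A minimal sketch of the same derivation, using the paths from the log; the loop itself is illustrative:

    # Illustrative: derive the hash-named symlinks for the certs installed above.
    for pem in /usr/share/ca-certificates/minikubeCA.pem \
               /usr/share/ca-certificates/8586.pem \
               /usr/share/ca-certificates/85862.pem; do
      h=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    done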
	I1216 03:06:50.074980  311649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:06:50.078979  311649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:06:50.079045  311649 kubeadm.go:401] StartCluster: {Name:kindnet-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:06:50.079128  311649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:06:50.079165  311649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:06:50.107017  311649 cri.go:89] found id: ""
	I1216 03:06:50.107074  311649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:06:50.115787  311649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:06:50.124409  311649 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:06:50.124473  311649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:06:50.132370  311649 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:06:50.132387  311649 kubeadm.go:158] found existing configuration files:
	
	I1216 03:06:50.132436  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:06:50.140621  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:06:50.140678  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:06:50.148112  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:06:50.155314  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:06:50.155365  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:06:50.163444  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:06:50.172463  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:06:50.172506  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:06:50.181207  311649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:06:50.189958  311649 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:06:50.190008  311649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:06:50.198269  311649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:06:50.259675  311649 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:06:50.322773  311649 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:06:47.498757  305678 addons.go:530] duration metric: took 521.268615ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:06:47.789777  305678 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-646016" context rescaled to 1 replicas
	W1216 03:06:49.289937  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:51.290033  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:52.430325  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:54.929684  301866 node_ready.go:57] node "embed-certs-742794" has "Ready":"False" status (will retry)
	W1216 03:06:53.025423  296715 pod_ready.go:104] pod "coredns-66bc5c9577-xndlx" is not "Ready", error: <nil>
	I1216 03:06:54.025688  296715 pod_ready.go:94] pod "coredns-66bc5c9577-xndlx" is "Ready"
	I1216 03:06:54.025718  296715 pod_ready.go:86] duration metric: took 37.505799828s for pod "coredns-66bc5c9577-xndlx" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.028581  296715 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.032600  296715 pod_ready.go:94] pod "etcd-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.032625  296715 pod_ready.go:86] duration metric: took 4.021316ms for pod "etcd-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.034486  296715 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.038375  296715 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.038397  296715 pod_ready.go:86] duration metric: took 3.88453ms for pod "kube-apiserver-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.042484  296715 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.223347  296715 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:54.223380  296715 pod_ready.go:86] duration metric: took 180.875268ms for pod "kube-controller-manager-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.423344  296715 pod_ready.go:83] waiting for pod "kube-proxy-2g6tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:54.823755  296715 pod_ready.go:94] pod "kube-proxy-2g6tn" is "Ready"
	I1216 03:06:54.823786  296715 pod_ready.go:86] duration metric: took 400.418478ms for pod "kube-proxy-2g6tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.023768  296715 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.423515  296715 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-079165" is "Ready"
	I1216 03:06:55.423544  296715 pod_ready.go:86] duration metric: took 399.751113ms for pod "kube-scheduler-default-k8s-diff-port-079165" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:55.423557  296715 pod_ready.go:40] duration metric: took 38.907102315s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:55.468787  296715 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:06:55.471584  296715 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-079165" cluster and "default" namespace by default
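The pod_ready loop above polls until CoreDNS and the control-plane pods report Ready before the default-k8s-diff-port-079165 profile is declared done. Roughly the same condition can be expressed with kubectl wait once the context exists; the context name comes from the log, while the label selector and timeout are illustrative:

    # Illustrative: reproduce the readiness check minikube performed above.
    kubectl --context default-k8s-diff-port-079165 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl --context default-k8s-diff-port-079165 -n kube-system get pods -o wide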
	W1216 03:06:53.290307  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	W1216 03:06:55.789926  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	I1216 03:06:56.429615  301866 node_ready.go:49] node "embed-certs-742794" is "Ready"
	I1216 03:06:56.429647  301866 node_ready.go:38] duration metric: took 10.503121729s for node "embed-certs-742794" to be "Ready" ...
	I1216 03:06:56.429666  301866 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:56.429726  301866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:56.442056  301866 api_server.go:72] duration metric: took 11.022842819s to wait for apiserver process to appear ...
	I1216 03:06:56.442082  301866 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:56.442103  301866 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 03:06:56.447056  301866 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 03:06:56.448029  301866 api_server.go:141] control plane version: v1.34.2
	I1216 03:06:56.448055  301866 api_server.go:131] duration metric: took 5.963373ms to wait for apiserver health ...
	I1216 03:06:56.448066  301866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:56.451399  301866 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:56.451426  301866 system_pods.go:61] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.451432  301866 system_pods.go:61] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.451438  301866 system_pods.go:61] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.451444  301866 system_pods.go:61] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.451448  301866 system_pods.go:61] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.451451  301866 system_pods.go:61] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.451455  301866 system_pods.go:61] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.451461  301866 system_pods.go:61] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.451468  301866 system_pods.go:74] duration metric: took 3.397556ms to wait for pod list to return data ...
	I1216 03:06:56.451480  301866 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:56.453702  301866 default_sa.go:45] found service account: "default"
	I1216 03:06:56.453730  301866 default_sa.go:55] duration metric: took 2.242699ms for default service account to be created ...
	I1216 03:06:56.453737  301866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:06:56.456453  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.456483  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.456491  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.456499  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.456505  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.456511  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.456517  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.456522  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.456533  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.456552  301866 retry.go:31] will retry after 190.871511ms: missing components: kube-dns
	I1216 03:06:56.652497  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.652527  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.652533  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.652539  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.652545  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.652551  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.652556  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.652561  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.652569  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.652589  301866 retry.go:31] will retry after 263.135615ms: missing components: kube-dns
	I1216 03:06:56.920090  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:56.920129  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:56.920138  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:56.920147  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:56.920153  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:56.920160  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:56.920165  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:56.920175  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:56.920188  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:56.920211  301866 retry.go:31] will retry after 424.081703ms: missing components: kube-dns
	I1216 03:06:57.348588  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:57.348624  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:57.348633  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:57.348641  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:57.348647  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:57.348652  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:57.348697  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:57.348727  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:57.348738  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:57.348759  301866 retry.go:31] will retry after 548.921416ms: missing components: kube-dns
	I1216 03:06:57.902738  301866 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:57.902773  301866 system_pods.go:89] "coredns-66bc5c9577-rz62v" [b3431f40-12b9-40af-b117-2d33d57e2306] Running
	I1216 03:06:57.902782  301866 system_pods.go:89] "etcd-embed-certs-742794" [b6b4c277-e8eb-4528-9a85-6d45ad2dc26b] Running
	I1216 03:06:57.902787  301866 system_pods.go:89] "kindnet-7vrj8" [7d52f247-e00a-4271-86d6-a86423271e2c] Running
	I1216 03:06:57.902793  301866 system_pods.go:89] "kube-apiserver-embed-certs-742794" [a06b98ae-aef3-4fe6-8710-87e149b2788c] Running
	I1216 03:06:57.902799  301866 system_pods.go:89] "kube-controller-manager-embed-certs-742794" [e28f5890-0bd9-4785-bc73-bf41d1d24cd5] Running
	I1216 03:06:57.902804  301866 system_pods.go:89] "kube-proxy-899tv" [b6750b5a-5904-46bb-bf98-7de6de239ee1] Running
	I1216 03:06:57.902809  301866 system_pods.go:89] "kube-scheduler-embed-certs-742794" [03013fce-1f44-4b15-bd87-d08c8ab2628d] Running
	I1216 03:06:57.902814  301866 system_pods.go:89] "storage-provisioner" [c4b740db-5b49-4331-ad97-1e4ba4180f9e] Running
	I1216 03:06:57.902854  301866 system_pods.go:126] duration metric: took 1.449111047s to wait for k8s-apps to be running ...
	I1216 03:06:57.902864  301866 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:06:57.902920  301866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:57.918800  301866 system_svc.go:56] duration metric: took 15.925631ms WaitForService to wait for kubelet
	I1216 03:06:57.918845  301866 kubeadm.go:587] duration metric: took 12.499634394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:57.918867  301866 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:57.922077  301866 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:57.922106  301866 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:57.922129  301866 node_conditions.go:105] duration metric: took 3.256352ms to run NodePressure ...
	I1216 03:06:57.922144  301866 start.go:242] waiting for startup goroutines ...
	I1216 03:06:57.922158  301866 start.go:247] waiting for cluster config update ...
	I1216 03:06:57.922174  301866 start.go:256] writing updated cluster config ...
	I1216 03:06:57.922469  301866 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:57.928097  301866 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:57.932548  301866 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.937661  301866 pod_ready.go:94] pod "coredns-66bc5c9577-rz62v" is "Ready"
	I1216 03:06:57.937691  301866 pod_ready.go:86] duration metric: took 5.118409ms for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.940008  301866 pod_ready.go:83] waiting for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.944367  301866 pod_ready.go:94] pod "etcd-embed-certs-742794" is "Ready"
	I1216 03:06:57.944388  301866 pod_ready.go:86] duration metric: took 4.358597ms for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.946807  301866 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.952672  301866 pod_ready.go:94] pod "kube-apiserver-embed-certs-742794" is "Ready"
	I1216 03:06:57.952695  301866 pod_ready.go:86] duration metric: took 5.836334ms for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:57.954866  301866 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.333247  301866 pod_ready.go:94] pod "kube-controller-manager-embed-certs-742794" is "Ready"
	I1216 03:06:58.333274  301866 pod_ready.go:86] duration metric: took 378.387824ms for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.532264  301866 pod_ready.go:83] waiting for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:58.933597  301866 pod_ready.go:94] pod "kube-proxy-899tv" is "Ready"
	I1216 03:06:58.933622  301866 pod_ready.go:86] duration metric: took 401.335129ms for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.133550  301866 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.532905  301866 pod_ready.go:94] pod "kube-scheduler-embed-certs-742794" is "Ready"
	I1216 03:06:59.532933  301866 pod_ready.go:86] duration metric: took 399.353784ms for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.532945  301866 pod_ready.go:40] duration metric: took 1.604812413s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:59.576977  301866 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:06:59.578834  301866 out.go:179] * Done! kubectl is now configured to use "embed-certs-742794" cluster and "default" namespace by default
	I1216 03:07:00.734146  311649 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:07:00.734241  311649 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:07:00.734336  311649 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:07:00.734445  311649 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:07:00.734513  311649 kubeadm.go:319] OS: Linux
	I1216 03:07:00.734595  311649 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:07:00.734665  311649 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:07:00.734745  311649 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:07:00.734807  311649 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:07:00.734941  311649 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:07:00.735023  311649 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:07:00.735095  311649 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:07:00.735168  311649 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:07:00.735274  311649 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:07:00.735439  311649 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:07:00.735570  311649 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:07:00.735660  311649 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:07:00.737122  311649 out.go:252]   - Generating certificates and keys ...
	I1216 03:07:00.737200  311649 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:07:00.737281  311649 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:07:00.737346  311649 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:07:00.737403  311649 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:07:00.737487  311649 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:07:00.737563  311649 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:07:00.737637  311649 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:07:00.737781  311649 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:07:00.737858  311649 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:07:00.737979  311649 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:07:00.738058  311649 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:07:00.738150  311649 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:07:00.738205  311649 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:07:00.738283  311649 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:07:00.738376  311649 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:07:00.738446  311649 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:07:00.738501  311649 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:07:00.738579  311649 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:07:00.738633  311649 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:07:00.738736  311649 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:07:00.738800  311649 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:07:00.740287  311649 out.go:252]   - Booting up control plane ...
	I1216 03:07:00.740372  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:07:00.740438  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:07:00.740524  311649 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:07:00.740652  311649 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:07:00.740772  311649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:07:00.740946  311649 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:07:00.741073  311649 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:07:00.741126  311649 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:07:00.741278  311649 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:07:00.741401  311649 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:07:00.741468  311649 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.981193ms
	I1216 03:07:00.741568  311649 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:07:00.741715  311649 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 03:07:00.741810  311649 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:07:00.741982  311649 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:07:00.742095  311649 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.877570449s
	I1216 03:07:00.742199  311649 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.443366435s
	I1216 03:07:00.742292  311649 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001072727s
	I1216 03:07:00.742448  311649 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:07:00.742548  311649 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:07:00.742619  311649 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:07:00.742803  311649 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:07:00.742872  311649 kubeadm.go:319] [bootstrap-token] Using token: qf8hji.ax4hpzqgdccyhdsp
	I1216 03:07:00.744251  311649 out.go:252]   - Configuring RBAC rules ...
	I1216 03:07:00.744348  311649 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:07:00.744421  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:07:00.744557  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:07:00.744689  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:07:00.744849  311649 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:07:00.744950  311649 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:07:00.745043  311649 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:07:00.745086  311649 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:07:00.745140  311649 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:07:00.745153  311649 kubeadm.go:319] 
	I1216 03:07:00.745212  311649 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:07:00.745218  311649 kubeadm.go:319] 
	I1216 03:07:00.745298  311649 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:07:00.745308  311649 kubeadm.go:319] 
	I1216 03:07:00.745347  311649 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:07:00.745409  311649 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:07:00.745452  311649 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:07:00.745460  311649 kubeadm.go:319] 
	I1216 03:07:00.745522  311649 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:07:00.745539  311649 kubeadm.go:319] 
	I1216 03:07:00.745581  311649 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:07:00.745587  311649 kubeadm.go:319] 
	I1216 03:07:00.745630  311649 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:07:00.745694  311649 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:07:00.745766  311649 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:07:00.745773  311649 kubeadm.go:319] 
	I1216 03:07:00.745892  311649 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:07:00.745971  311649 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:07:00.745977  311649 kubeadm.go:319] 
	I1216 03:07:00.746075  311649 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qf8hji.ax4hpzqgdccyhdsp \
	I1216 03:07:00.746254  311649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:07:00.746296  311649 kubeadm.go:319] 	--control-plane 
	I1216 03:07:00.746311  311649 kubeadm.go:319] 
	I1216 03:07:00.746393  311649 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:07:00.746400  311649 kubeadm.go:319] 
	I1216 03:07:00.746491  311649 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qf8hji.ax4hpzqgdccyhdsp \
	I1216 03:07:00.746595  311649 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:07:00.746611  311649 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:07:00.748130  311649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1216 03:06:58.288855  305678 node_ready.go:57] node "auto-646016" has "Ready":"False" status (will retry)
	I1216 03:06:58.790092  305678 node_ready.go:49] node "auto-646016" is "Ready"
	I1216 03:06:58.790126  305678 node_ready.go:38] duration metric: took 11.503870198s for node "auto-646016" to be "Ready" ...
	I1216 03:06:58.790140  305678 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:06:58.790207  305678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:06:58.808029  305678 api_server.go:72] duration metric: took 11.830592066s to wait for apiserver process to appear ...
	I1216 03:06:58.808059  305678 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:06:58.808080  305678 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 03:06:58.815119  305678 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 03:06:58.816423  305678 api_server.go:141] control plane version: v1.34.2
	I1216 03:06:58.816504  305678 api_server.go:131] duration metric: took 8.436974ms to wait for apiserver health ...
	I1216 03:06:58.816533  305678 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:06:58.821280  305678 system_pods.go:59] 8 kube-system pods found
	I1216 03:06:58.821368  305678 system_pods.go:61] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:58.821400  305678 system_pods.go:61] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:58.821419  305678 system_pods.go:61] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:58.821439  305678 system_pods.go:61] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:58.821456  305678 system_pods.go:61] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:58.821475  305678 system_pods.go:61] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:58.821485  305678 system_pods.go:61] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:58.821492  305678 system_pods.go:61] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:58.821502  305678 system_pods.go:74] duration metric: took 4.950516ms to wait for pod list to return data ...
	I1216 03:06:58.821546  305678 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:06:58.824059  305678 default_sa.go:45] found service account: "default"
	I1216 03:06:58.824080  305678 default_sa.go:55] duration metric: took 2.522405ms for default service account to be created ...
	I1216 03:06:58.824091  305678 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:06:58.827274  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:58.827304  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:58.827312  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:58.827321  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:58.827326  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:58.827331  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:58.827341  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:58.827347  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:58.827358  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:58.827393  305678 retry.go:31] will retry after 259.79372ms: missing components: kube-dns
	I1216 03:06:59.091902  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:59.091931  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:06:59.091936  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:59.091960  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:59.091965  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:59.091971  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:59.091976  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:59.091984  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:59.091991  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:06:59.092011  305678 retry.go:31] will retry after 323.360238ms: missing components: kube-dns
	I1216 03:06:59.419712  305678 system_pods.go:86] 8 kube-system pods found
	I1216 03:06:59.419750  305678 system_pods.go:89] "coredns-66bc5c9577-w7kfz" [e1b4abce-b743-42ac-b597-b1be751bccf1] Running
	I1216 03:06:59.419760  305678 system_pods.go:89] "etcd-auto-646016" [3ba89e12-e6af-416e-83ea-bdba635fda27] Running
	I1216 03:06:59.419766  305678 system_pods.go:89] "kindnet-pssxt" [48919fa4-0091-4b12-9b21-75b89a6eff9b] Running
	I1216 03:06:59.419782  305678 system_pods.go:89] "kube-apiserver-auto-646016" [9f13e8f4-18b4-4dc0-b844-def1b5b557f5] Running
	I1216 03:06:59.419793  305678 system_pods.go:89] "kube-controller-manager-auto-646016" [0b4b87b0-4e21-4931-ab9f-a30662e89ccb] Running
	I1216 03:06:59.419800  305678 system_pods.go:89] "kube-proxy-hwssz" [672191cc-97f9-4fc3-b1b6-6249f801526f] Running
	I1216 03:06:59.419815  305678 system_pods.go:89] "kube-scheduler-auto-646016" [ade64919-2b94-47ca-a79b-21b8a013ca02] Running
	I1216 03:06:59.419838  305678 system_pods.go:89] "storage-provisioner" [5bf3f625-598a-4853-b014-1cfabb3de60f] Running
	I1216 03:06:59.419849  305678 system_pods.go:126] duration metric: took 595.751665ms to wait for k8s-apps to be running ...
	I1216 03:06:59.419884  305678 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:06:59.419987  305678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:06:59.433260  305678 system_svc.go:56] duration metric: took 13.390186ms WaitForService to wait for kubelet
	I1216 03:06:59.433294  305678 kubeadm.go:587] duration metric: took 12.45586268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:06:59.433320  305678 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:06:59.436233  305678 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:06:59.436259  305678 node_conditions.go:123] node cpu capacity is 8
	I1216 03:06:59.436274  305678 node_conditions.go:105] duration metric: took 2.942077ms to run NodePressure ...
	I1216 03:06:59.436285  305678 start.go:242] waiting for startup goroutines ...
	I1216 03:06:59.436292  305678 start.go:247] waiting for cluster config update ...
	I1216 03:06:59.436331  305678 start.go:256] writing updated cluster config ...
	I1216 03:06:59.436568  305678 ssh_runner.go:195] Run: rm -f paused
	I1216 03:06:59.440748  305678 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:06:59.444513  305678 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7kfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.448521  305678 pod_ready.go:94] pod "coredns-66bc5c9577-w7kfz" is "Ready"
	I1216 03:06:59.448540  305678 pod_ready.go:86] duration metric: took 4.002957ms for pod "coredns-66bc5c9577-w7kfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.450549  305678 pod_ready.go:83] waiting for pod "etcd-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.454198  305678 pod_ready.go:94] pod "etcd-auto-646016" is "Ready"
	I1216 03:06:59.454220  305678 pod_ready.go:86] duration metric: took 3.644632ms for pod "etcd-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.456274  305678 pod_ready.go:83] waiting for pod "kube-apiserver-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.459897  305678 pod_ready.go:94] pod "kube-apiserver-auto-646016" is "Ready"
	I1216 03:06:59.459920  305678 pod_ready.go:86] duration metric: took 3.627374ms for pod "kube-apiserver-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.462673  305678 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:06:59.845697  305678 pod_ready.go:94] pod "kube-controller-manager-auto-646016" is "Ready"
	I1216 03:06:59.845724  305678 pod_ready.go:86] duration metric: took 383.032974ms for pod "kube-controller-manager-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.046236  305678 pod_ready.go:83] waiting for pod "kube-proxy-hwssz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.445412  305678 pod_ready.go:94] pod "kube-proxy-hwssz" is "Ready"
	I1216 03:07:00.445441  305678 pod_ready.go:86] duration metric: took 399.181443ms for pod "kube-proxy-hwssz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:00.645598  305678 pod_ready.go:83] waiting for pod "kube-scheduler-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:01.046668  305678 pod_ready.go:94] pod "kube-scheduler-auto-646016" is "Ready"
	I1216 03:07:01.046698  305678 pod_ready.go:86] duration metric: took 401.069816ms for pod "kube-scheduler-auto-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:07:01.046714  305678 pod_ready.go:40] duration metric: took 1.605935876s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:07:01.100168  305678 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:07:01.102443  305678 out.go:179] * Done! kubectl is now configured to use "auto-646016" cluster and "default" namespace by default
	I1216 03:07:00.749233  311649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 03:07:00.753983  311649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:07:00.753999  311649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 03:07:00.769555  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:07:00.983333  311649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:07:00.983402  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:00.983420  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-646016 minikube.k8s.io/updated_at=2025_12_16T03_07_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=kindnet-646016 minikube.k8s.io/primary=true
	I1216 03:07:00.994666  311649 ops.go:34] apiserver oom_adj: -16
	I1216 03:07:01.075445  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:01.575786  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:02.076390  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:02.575611  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:03.075547  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:03.575755  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:04.076148  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:04.575753  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:05.075504  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:05.576052  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:06.076085  311649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:06.148434  311649 kubeadm.go:1114] duration metric: took 5.165094916s to wait for elevateKubeSystemPrivileges
	I1216 03:07:06.148465  311649 kubeadm.go:403] duration metric: took 16.069424018s to StartCluster
	I1216 03:07:06.148481  311649 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:06.148539  311649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:07:06.150375  311649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:06.150605  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:07:06.150611  311649 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:07:06.150712  311649 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:07:06.150851  311649 addons.go:70] Setting storage-provisioner=true in profile "kindnet-646016"
	I1216 03:07:06.150859  311649 config.go:182] Loaded profile config "kindnet-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:06.150876  311649 addons.go:239] Setting addon storage-provisioner=true in "kindnet-646016"
	I1216 03:07:06.150888  311649 addons.go:70] Setting default-storageclass=true in profile "kindnet-646016"
	I1216 03:07:06.150909  311649 host.go:66] Checking if "kindnet-646016" exists ...
	I1216 03:07:06.150910  311649 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-646016"
	I1216 03:07:06.151282  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.151441  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.152143  311649 out.go:179] * Verifying Kubernetes components...
	I1216 03:07:06.153565  311649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:07:06.176673  311649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:07:06.178156  311649 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:07:06.178180  311649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:07:06.178249  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:07:06.179311  311649 addons.go:239] Setting addon default-storageclass=true in "kindnet-646016"
	I1216 03:07:06.179358  311649 host.go:66] Checking if "kindnet-646016" exists ...
	I1216 03:07:06.179811  311649 cli_runner.go:164] Run: docker container inspect kindnet-646016 --format={{.State.Status}}
	I1216 03:07:06.206511  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:07:06.210644  311649 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:07:06.210666  311649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:07:06.210723  311649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-646016
	I1216 03:07:06.239999  311649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/kindnet-646016/id_rsa Username:docker}
	I1216 03:07:06.243953  311649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:07:06.320537  311649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:07:06.324892  311649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:07:06.358779  311649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:07:06.419573  311649 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1216 03:07:06.421111  311649 node_ready.go:35] waiting up to 15m0s for node "kindnet-646016" to be "Ready" ...
	I1216 03:07:06.616907  311649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:07:06.618138  311649 addons.go:530] duration metric: took 467.411759ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:07:06.924634  311649 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-646016" context rescaled to 1 replicas
	W1216 03:07:08.424065  311649 node_ready.go:57] node "kindnet-646016" has "Ready":"False" status (will retry)
	W1216 03:07:10.424689  311649 node_ready.go:57] node "kindnet-646016" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 16 03:06:41 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:41.748007817Z" level=info msg="Started container" PID=1748 containerID=f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper id=37913eb5-bfa5-44de-afab-44a1a60d2949 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e983c3424c0f8f2c018d765fad8f3bf6cae711961033abbdc4fb7d1dca9884f6
	Dec 16 03:06:42 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:42.368075439Z" level=info msg="Removing container: cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45" id=0279ddc5-82bf-4173-8d9f-13a4b5a9325d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:06:42 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:42.377251268Z" level=info msg="Removed container cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=0279ddc5-82bf-4173-8d9f-13a4b5a9325d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.379119045Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=dcc9a025-6092-4cf1-b87e-ccc3c6bff1f5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.38009357Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=983ff651-cf81-4e14-a826-d9574b872308 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.38121106Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1891ec5b-2913-42e3-ad86-23e0fc6f17aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.381347465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.38577982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.385951188Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3af0fc3a33a2d3a46d59856fd43a675dd2f3723dff4f9ceccf1e4735543bf537/merged/etc/passwd: no such file or directory"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.385975612Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3af0fc3a33a2d3a46d59856fd43a675dd2f3723dff4f9ceccf1e4735543bf537/merged/etc/group: no such file or directory"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.386192987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.427153131Z" level=info msg="Created container 6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9: kube-system/storage-provisioner/storage-provisioner" id=1891ec5b-2913-42e3-ad86-23e0fc6f17aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.42797118Z" level=info msg="Starting container: 6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9" id=3aa3c8e2-2bf9-4ca8-96a9-83d6a34ef0fc name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:06:46 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:06:46.430414722Z" level=info msg="Started container" PID=1762 containerID=6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9 description=kube-system/storage-provisioner/storage-provisioner id=3aa3c8e2-2bf9-4ca8-96a9-83d6a34ef0fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=414e8ac3bed89aa5672bd11b143e7e2f6de3690caaa4bba4977843ed83ae2ca3
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.236331982Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3897ca6f-2c79-4002-963c-370405a5ac9b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.23750281Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c096183f-cb79-4a30-88fe-db24b4424792 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.238526278Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=b95cc50e-8c27-434b-a0c9-172667694e5b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.238676855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.243928988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.244359624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.277191596Z" level=info msg="Created container 9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=b95cc50e-8c27-434b-a0c9-172667694e5b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.27791959Z" level=info msg="Starting container: 9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916" id=eb203528-8ff4-491f-b014-28987dd48c87 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.2797455Z" level=info msg="Started container" PID=1800 containerID=9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper id=eb203528-8ff4-491f-b014-28987dd48c87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e983c3424c0f8f2c018d765fad8f3bf6cae711961033abbdc4fb7d1dca9884f6
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.428498364Z" level=info msg="Removing container: f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3" id=53180ffb-c8cf-4ad2-8814-1a9a5395e1d1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:07:04 default-k8s-diff-port-079165 crio[572]: time="2025-12-16T03:07:04.438839257Z" level=info msg="Removed container f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z/dashboard-metrics-scraper" id=53180ffb-c8cf-4ad2-8814-1a9a5395e1d1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	9e0a9aaa36217       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   e983c3424c0f8       dashboard-metrics-scraper-6ffb444bf9-rqq6z             kubernetes-dashboard
	6eebae3db0a17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   414e8ac3bed89       storage-provisioner                                    kube-system
	7b84397dc8626       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   be3a3a11415cd       kubernetes-dashboard-855c9754f9-s5jhg                  kubernetes-dashboard
	b8d4c9ffcedfa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago      Running             coredns                     0                   7d8e1a1ad8ab1       coredns-66bc5c9577-xndlx                               kube-system
	9ef7e22f5cd62       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago      Running             busybox                     1                   8c93e4944a79d       busybox                                                default
	e2bb736213932       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago      Exited              storage-provisioner         0                   414e8ac3bed89       storage-provisioner                                    kube-system
	670184db3f804       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago      Running             kindnet-cni                 0                   f1b56408d0141       kindnet-w5gmn                                          kube-system
	07671a687288f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           57 seconds ago      Running             kube-proxy                  0                   f2581db915191       kube-proxy-2g6tn                                       kube-system
	7f87e3c1123f6       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   8be1ff7a1fd80       kube-scheduler-default-k8s-diff-port-079165            kube-system
	8c44d80f00165       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   2f35eae814b79       kube-apiserver-default-k8s-diff-port-079165            kube-system
	f08cb369199f4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   72428c8695a8d       kube-controller-manager-default-k8s-diff-port-079165   kube-system
	9eb509b8cbb5d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   00fda6074bfc2       etcd-default-k8s-diff-port-079165                      kube-system
	
	
	==> coredns [b8d4c9ffcedfa2733716688755d46ab1cc30a1030b23f067da3967664b23c7d2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55825 - 63081 "HINFO IN 6203087275699617728.7508908622677758774. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015988165s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-079165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-079165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=default-k8s-diff-port-079165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_05_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:05:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-079165
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:07:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:06:55 +0000   Tue, 16 Dec 2025 03:05:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-079165
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                67cf8032-f343-4067-841b-e5dc637b7a61
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-xndlx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-079165                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-w5gmn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-079165             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-079165    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-2g6tn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-079165             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rqq6z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s5jhg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node default-k8s-diff-port-079165 event: Registered Node default-k8s-diff-port-079165 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-079165 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-079165 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-079165 event: Registered Node default-k8s-diff-port-079165 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [9eb509b8cbb5d7a44028103cf5f6f28096129184fb10f77e1543e3556c3e9c5f] <==
	{"level":"warn","ts":"2025-12-16T03:06:19.715248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.439583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-12-16T03:06:19.715255Z","caller":"traceutil/trace.go:172","msg":"trace[768632770] transaction","detail":"{read_only:false; response_revision:556; number_of_response:1; }","duration":"363.455218ms","start":"2025-12-16T03:06:19.351781Z","end":"2025-12-16T03:06:19.715237Z","steps":["trace[768632770] 'process raft request'  (duration: 363.197001ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:19.715285Z","caller":"traceutil/trace.go:172","msg":"trace[166994788] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:558; }","duration":"165.485238ms","start":"2025-12-16T03:06:19.549792Z","end":"2025-12-16T03:06:19.715277Z","steps":["trace[166994788] 'agreement among raft nodes before linearized reading'  (duration: 165.378568ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:19.715299Z","caller":"traceutil/trace.go:172","msg":"trace[1783366334] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"363.030687ms","start":"2025-12-16T03:06:19.352254Z","end":"2025-12-16T03:06:19.715285Z","steps":["trace[1783366334] 'process raft request'  (duration: 362.843883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:19.715612Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:19.352241Z","time spent":"363.311683ms","remote":"127.0.0.1:56048","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:551 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:4688 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" > >"}
	{"level":"warn","ts":"2025-12-16T03:06:19.715353Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:19.351765Z","time spent":"363.528729ms","remote":"127.0.0.1:56048","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4918,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:548 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4847 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"info","ts":"2025-12-16T03:06:19.715394Z","caller":"traceutil/trace.go:172","msg":"trace[2054746561] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"363.027775ms","start":"2025-12-16T03:06:19.352354Z","end":"2025-12-16T03:06:19.715382Z","steps":["trace[2054746561] 'process raft request'  (duration: 362.775255ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:19.715833Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T03:06:19.352334Z","time spent":"363.437074ms","remote":"127.0.0.1:55514","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4220,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z\" mod_revision:544 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z\" value_size:4134 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z\" > >"}
	{"level":"warn","ts":"2025-12-16T03:06:19.715434Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.63598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-12-16T03:06:19.716020Z","caller":"traceutil/trace.go:172","msg":"trace[86406049] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:558; }","duration":"166.216163ms","start":"2025-12-16T03:06:19.549792Z","end":"2025-12-16T03:06:19.716008Z","steps":["trace[86406049] 'agreement among raft nodes before linearized reading'  (duration: 165.581475ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:19.966963Z","caller":"traceutil/trace.go:172","msg":"trace[224739754] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"242.173742ms","start":"2025-12-16T03:06:19.724768Z","end":"2025-12-16T03:06:19.966942Z","steps":["trace[224739754] 'process raft request'  (duration: 154.540373ms)","trace[224739754] 'compare'  (duration: 87.50409ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T03:06:25.551884Z","caller":"traceutil/trace.go:172","msg":"trace[1742646181] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"173.20599ms","start":"2025-12-16T03:06:25.378659Z","end":"2025-12-16T03:06:25.551865Z","steps":["trace[1742646181] 'process raft request'  (duration: 173.074465ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.132923Z","caller":"traceutil/trace.go:172","msg":"trace[1011072438] linearizableReadLoop","detail":"{readStateIndex:607; appliedIndex:607; }","duration":"110.425814ms","start":"2025-12-16T03:06:26.022472Z","end":"2025-12-16T03:06:26.132897Z","steps":["trace[1011072438] 'read index received'  (duration: 110.417141ms)","trace[1011072438] 'applied index is now lower than readState.Index'  (duration: 7.103µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.133166Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.676048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-xndlx\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-12-16T03:06:26.133264Z","caller":"traceutil/trace.go:172","msg":"trace[1510992810] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-xndlx; range_end:; response_count:1; response_revision:578; }","duration":"110.791232ms","start":"2025-12-16T03:06:26.022461Z","end":"2025-12-16T03:06:26.133253Z","steps":["trace[1510992810] 'agreement among raft nodes before linearized reading'  (duration: 110.563494ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.133542Z","caller":"traceutil/trace.go:172","msg":"trace[1831736857] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"284.585754ms","start":"2025-12-16T03:06:25.848940Z","end":"2025-12-16T03:06:26.133526Z","steps":["trace[1831736857] 'process raft request'  (duration: 284.062898ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:26.296414Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.171811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-079165\" limit:1 ","response":"range_response_count:1 size:5758"}
	{"level":"info","ts":"2025-12-16T03:06:26.296532Z","caller":"traceutil/trace.go:172","msg":"trace[1045722015] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-079165; range_end:; response_count:1; response_revision:579; }","duration":"158.304928ms","start":"2025-12-16T03:06:26.138212Z","end":"2025-12-16T03:06:26.296517Z","steps":["trace[1045722015] 'agreement among raft nodes before linearized reading'  (duration: 97.197347ms)","trace[1045722015] 'range keys from in-memory index tree'  (duration: 60.863931ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T03:06:26.299036Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.882048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-079165\" limit:1 ","response":"range_response_count:1 size:7994"}
	{"level":"info","ts":"2025-12-16T03:06:26.299102Z","caller":"traceutil/trace.go:172","msg":"trace[983807624] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-079165; range_end:; response_count:1; response_revision:579; }","duration":"158.953056ms","start":"2025-12-16T03:06:26.140131Z","end":"2025-12-16T03:06:26.299084Z","steps":["trace[983807624] 'agreement among raft nodes before linearized reading'  (duration: 158.749276ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.431891Z","caller":"traceutil/trace.go:172","msg":"trace[1990496510] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"124.352549ms","start":"2025-12-16T03:06:26.307518Z","end":"2025-12-16T03:06:26.431870Z","steps":["trace[1990496510] 'process raft request'  (duration: 116.100579ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:26.552969Z","caller":"traceutil/trace.go:172","msg":"trace[1455186466] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"107.533199ms","start":"2025-12-16T03:06:26.445417Z","end":"2025-12-16T03:06:26.552950Z","steps":["trace[1455186466] 'process raft request'  (duration: 107.086425ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T03:06:27.288985Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.522871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-079165\" limit:1 ","response":"range_response_count:1 size:6167"}
	{"level":"info","ts":"2025-12-16T03:06:27.289964Z","caller":"traceutil/trace.go:172","msg":"trace[1119916504] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-079165; range_end:; response_count:1; response_revision:584; }","duration":"122.505628ms","start":"2025-12-16T03:06:27.167429Z","end":"2025-12-16T03:06:27.289934Z","steps":["trace[1119916504] 'range keys from in-memory index tree'  (duration: 121.439331ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:06:41.349201Z","caller":"traceutil/trace.go:172","msg":"trace[882607615] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"107.397008ms","start":"2025-12-16T03:06:41.241767Z","end":"2025-12-16T03:06:41.349164Z","steps":["trace[882607615] 'process raft request'  (duration: 107.231563ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:07:12 up 49 min,  0 user,  load average: 4.05, 3.28, 2.12
	Linux default-k8s-diff-port-079165 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [670184db3f80433545341b0de34dd360a72b345c9118b0e24ab4a3867cf7efb9] <==
	I1216 03:06:15.889591       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:06:15.890018       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 03:06:15.890174       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:06:15.890190       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:06:15.890210       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:06:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:06:16.183285       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:06:16.183325       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:06:16.183337       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:06:16.183528       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:06:16.683643       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:06:16.683677       1 metrics.go:72] Registering metrics
	I1216 03:06:16.683738       1 controller.go:711] "Syncing nftables rules"
	I1216 03:06:26.181862       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:26.181932       1 main.go:301] handling current node
	I1216 03:06:36.181980       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:36.182018       1 main.go:301] handling current node
	I1216 03:06:46.181300       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:46.181363       1 main.go:301] handling current node
	I1216 03:06:56.181662       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:06:56.181699       1 main.go:301] handling current node
	I1216 03:07:06.181848       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 03:07:06.181884       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c44d80f00165272fd0d7f4fe0f600eca4f5945b7fff563472e76e5a5c4b2055] <==
	I1216 03:06:14.773366       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 03:06:14.773295       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 03:06:14.778623       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 03:06:14.783224       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:06:14.787808       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:06:14.788332       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:06:14.788409       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:06:14.788456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:06:14.788483       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:06:14.797534       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:06:14.811403       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:06:14.828883       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 03:06:14.828916       1 policy_source.go:240] refreshing policies
	I1216 03:06:14.835540       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:06:15.160665       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:06:15.199678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:06:15.232184       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:06:15.245459       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:06:15.260572       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:06:15.332036       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.206.126"}
	I1216 03:06:15.372078       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.22.42"}
	I1216 03:06:15.673393       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:06:18.518492       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:06:18.647079       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:06:18.836461       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f08cb369199f4afaffd3bcb8c4c8d87f52e397a6343b60c3723942d509b93e09] <==
	I1216 03:06:18.107197       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 03:06:18.107238       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 03:06:18.108462       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 03:06:18.108544       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 03:06:18.112715       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 03:06:18.112745       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1216 03:06:18.112846       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 03:06:18.112855       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 03:06:18.112864       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 03:06:18.112937       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:06:18.113139       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 03:06:18.113194       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 03:06:18.116695       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 03:06:18.119019       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:06:18.120075       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 03:06:18.129378       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:06:18.129451       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:06:18.129485       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:06:18.129497       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:06:18.129505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:06:18.132069       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:06:18.133305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:06:18.133392       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 03:06:18.136573       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 03:06:18.138859       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [07671a687288ffef99fb4f4809554ea0de160ede89fc4e8bb5a301fe2dd3c604] <==
	I1216 03:06:15.658999       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:06:15.734806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:06:15.835162       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:06:15.835229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 03:06:15.835372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:06:15.869523       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:06:15.869644       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:06:15.877605       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:06:15.878208       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:06:15.878261       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:15.880417       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:06:15.880450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:06:15.880577       1 config.go:200] "Starting service config controller"
	I1216 03:06:15.880596       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:06:15.880637       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:06:15.880650       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:06:15.881000       1 config.go:309] "Starting node config controller"
	I1216 03:06:15.881026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:06:15.881034       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:06:15.980977       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:06:15.981032       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:06:15.981073       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7f87e3c1123f6a7cdb3d996a27b53d6f22b23b6351b58d02cdb00eb78de8c301] <==
	I1216 03:06:13.297410       1 serving.go:386] Generated self-signed cert in-memory
	W1216 03:06:14.711286       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:06:14.711336       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:06:14.711349       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:06:14.711357       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:06:14.765205       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:06:14.765338       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:06:14.768536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:06:14.768625       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:06:14.769631       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:06:14.769717       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:06:14.869299       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 03:06:23 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:23.626376     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 16 03:06:24 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:24.302303     726 scope.go:117] "RemoveContainer" containerID="3a7b04394a668e79439508be34c2cea0acdbb7a883b2d55dbe79f3a2134ea093"
	Dec 16 03:06:25 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:25.309675     726 scope.go:117] "RemoveContainer" containerID="3a7b04394a668e79439508be34c2cea0acdbb7a883b2d55dbe79f3a2134ea093"
	Dec 16 03:06:25 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:25.310065     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:25 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:25.310266     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:26 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:26.313870     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:26 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:26.314082     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:29 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:29.288751     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:29 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:29.289078     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:30 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:30.239380     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s5jhg" podStartSLOduration=3.51342102 podStartE2EDuration="12.239355556s" podCreationTimestamp="2025-12-16 03:06:18 +0000 UTC" firstStartedPulling="2025-12-16 03:06:20.029486343 +0000 UTC m=+7.888367757" lastFinishedPulling="2025-12-16 03:06:28.755420865 +0000 UTC m=+16.614302293" observedRunningTime="2025-12-16 03:06:29.338119841 +0000 UTC m=+17.197001296" watchObservedRunningTime="2025-12-16 03:06:30.239355556 +0000 UTC m=+18.098236992"
	Dec 16 03:06:41 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:41.235647     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:42 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:42.365003     726 scope.go:117] "RemoveContainer" containerID="cb9cfb82eda886e3ffae4b683f0057023a977c5ef23bb03d9acc7bd6ab78aa45"
	Dec 16 03:06:42 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:42.365244     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:06:42 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:42.365464     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:06:46 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:46.378671     726 scope.go:117] "RemoveContainer" containerID="e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa"
	Dec 16 03:06:49 default-k8s-diff-port-079165 kubelet[726]: I1216 03:06:49.289246     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:06:49 default-k8s-diff-port-079165 kubelet[726]: E1216 03:06:49.289499     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: I1216 03:07:04.235847     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: I1216 03:07:04.427176     726 scope.go:117] "RemoveContainer" containerID="f0e9b05117c29bfa9382f5b3a6b0b3645f5d116b5d822532b1acb620db6e68a3"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: I1216 03:07:04.427409     726 scope.go:117] "RemoveContainer" containerID="9e0a9aaa362179309012a20041579f0b755d87ce1333ff3375a83e0df1c03916"
	Dec 16 03:07:04 default-k8s-diff-port-079165 kubelet[726]: E1216 03:07:04.427620     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rqq6z_kubernetes-dashboard(6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rqq6z" podUID="6ddf774d-1a60-4b4b-aff4-8e4fe4b568a7"
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:07:07 default-k8s-diff-port-079165 systemd[1]: kubelet.service: Consumed 1.901s CPU time.
	
	
	==> kubernetes-dashboard [7b84397dc86262d0b356378c6b12b84c6636937a33524732bdbe7c871c61d178] <==
	2025/12/16 03:06:28 Using namespace: kubernetes-dashboard
	2025/12/16 03:06:28 Using in-cluster config to connect to apiserver
	2025/12/16 03:06:28 Using secret token for csrf signing
	2025/12/16 03:06:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:06:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:06:28 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 03:06:28 Generating JWE encryption key
	2025/12/16 03:06:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:06:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:06:28 Initializing JWE encryption key from synchronized object
	2025/12/16 03:06:28 Creating in-cluster Sidecar client
	2025/12/16 03:06:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:06:28 Serving insecurely on HTTP port: 9090
	2025/12/16 03:06:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:06:28 Starting overwatch
	
	
	==> storage-provisioner [6eebae3db0a17b24bb74d784f6a8b1568b4949476ffcabd97ad0f659fe7fc1f9] <==
	I1216 03:06:46.445910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:06:46.453363       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:06:46.453403       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:06:46.455602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:49.911091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:54.171920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:06:57.770775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:00.825686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:03.848610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:03.852873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:07:03.853055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:07:03.853210       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079165_ab42cbcb-a8c1-40ad-a130-dc2cd0a0ded5!
	I1216 03:07:03.853203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41786d2c-b62a-4752-9d3d-2698b61108be", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-079165_ab42cbcb-a8c1-40ad-a130-dc2cd0a0ded5 became leader
	W1216 03:07:03.855161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:03.859726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:07:03.953394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-079165_ab42cbcb-a8c1-40ad-a130-dc2cd0a0ded5!
	W1216 03:07:05.863134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:05.866637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:07.870465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:07.874419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:09.878080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:09.885150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:11.888387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:07:11.893102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e2bb736213932a0d574f81d4a2d81923f2f64d896b3105968cedde5b8c02bafa] <==
	I1216 03:06:15.624386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:06:45.629260       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165: exit status 2 (325.467635ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-079165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.31s)
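The Pause failures in this group follow the pattern visible in the embed-certs run below: "minikube pause" exits with status 80 after "sudo runc list -f json" inside the node fails with "open /run/runc: no such file or directory". A minimal reproduction sketch against this profile, assuming the locally built binary at out/minikube-linux-amd64 and the profile name from this run; the "minikube ssh" invocations are illustrative additions, not part of the captured logs:

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-079165 --alsologtostderr -v=1
	# inspect the runc state directory that pause's container listing depends on
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-079165 -- sudo runc list -f json
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-079165 -- ls -ld /run/runc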

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-742794 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-742794 --alsologtostderr -v=1: exit status 80 (2.368546555s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-742794 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:08:24.226638  341230 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:08:24.226939  341230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:08:24.226952  341230 out.go:374] Setting ErrFile to fd 2...
	I1216 03:08:24.226959  341230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:08:24.227231  341230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:08:24.227503  341230 out.go:368] Setting JSON to false
	I1216 03:08:24.227521  341230 mustload.go:66] Loading cluster: embed-certs-742794
	I1216 03:08:24.227948  341230 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:08:24.228356  341230 cli_runner.go:164] Run: docker container inspect embed-certs-742794 --format={{.State.Status}}
	I1216 03:08:24.247638  341230 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:08:24.247932  341230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:08:24.305633  341230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-16 03:08:24.29535959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:08:24.306281  341230 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765836331-22158/minikube-v1.37.0-1765836331-22158-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765836331-22158-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-742794 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 03:08:24.308086  341230 out.go:179] * Pausing node embed-certs-742794 ... 
	I1216 03:08:24.309195  341230 host.go:66] Checking if "embed-certs-742794" exists ...
	I1216 03:08:24.309458  341230 ssh_runner.go:195] Run: systemctl --version
	I1216 03:08:24.309503  341230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-742794
	I1216 03:08:24.328871  341230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/embed-certs-742794/id_rsa Username:docker}
	I1216 03:08:24.425572  341230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:24.438622  341230 pause.go:52] kubelet running: true
	I1216 03:08:24.438689  341230 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:08:24.631641  341230 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:08:24.631746  341230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:08:24.700437  341230 cri.go:89] found id: "d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68"
	I1216 03:08:24.700460  341230 cri.go:89] found id: "42861ed8183ec9b607073cc1143c737d3eff40777a75bb80cb7974e97a232559"
	I1216 03:08:24.700466  341230 cri.go:89] found id: "9eec54ef0eb86273caa75b15f014b05844823ed2fcbbe238e3b384a5d99b6639"
	I1216 03:08:24.700471  341230 cri.go:89] found id: "ab93683ff228de0b42359c8c20af8f7ff9fc95e2443f32138c095e7e5f671a02"
	I1216 03:08:24.700476  341230 cri.go:89] found id: "7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f"
	I1216 03:08:24.700481  341230 cri.go:89] found id: "cf6f05491bb981c385f482944e6fdb86fd324db78c798013d940ed415f22f291"
	I1216 03:08:24.700485  341230 cri.go:89] found id: "a181636c6acb97bb608ea7a6cee423c766f5c5b809c9f71463703439007e8b17"
	I1216 03:08:24.700491  341230 cri.go:89] found id: "667d4cacc59090493c14b00dca21c677045a2a6fb1054fcb25d012a6e29094bf"
	I1216 03:08:24.700494  341230 cri.go:89] found id: "81e653c21515d606ea13ae7cc6d22ed82d4602cf4029cf8f71ab38a7b6a21823"
	I1216 03:08:24.700506  341230 cri.go:89] found id: "0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6"
	I1216 03:08:24.700514  341230 cri.go:89] found id: "424c3093fc615de39945cad66d5ba586f5bee74a165ec3d30b0e055e1bbe7a17"
	I1216 03:08:24.700519  341230 cri.go:89] found id: ""
	I1216 03:08:24.700575  341230 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:08:24.712392  341230 retry.go:31] will retry after 137.150628ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:08:24Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:08:24.849694  341230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:24.862781  341230 pause.go:52] kubelet running: false
	I1216 03:08:24.862851  341230 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:08:25.005857  341230 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:08:25.005933  341230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:08:25.077469  341230 cri.go:89] found id: "d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68"
	I1216 03:08:25.077486  341230 cri.go:89] found id: "42861ed8183ec9b607073cc1143c737d3eff40777a75bb80cb7974e97a232559"
	I1216 03:08:25.077490  341230 cri.go:89] found id: "9eec54ef0eb86273caa75b15f014b05844823ed2fcbbe238e3b384a5d99b6639"
	I1216 03:08:25.077493  341230 cri.go:89] found id: "ab93683ff228de0b42359c8c20af8f7ff9fc95e2443f32138c095e7e5f671a02"
	I1216 03:08:25.077501  341230 cri.go:89] found id: "7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f"
	I1216 03:08:25.077505  341230 cri.go:89] found id: "cf6f05491bb981c385f482944e6fdb86fd324db78c798013d940ed415f22f291"
	I1216 03:08:25.077507  341230 cri.go:89] found id: "a181636c6acb97bb608ea7a6cee423c766f5c5b809c9f71463703439007e8b17"
	I1216 03:08:25.077510  341230 cri.go:89] found id: "667d4cacc59090493c14b00dca21c677045a2a6fb1054fcb25d012a6e29094bf"
	I1216 03:08:25.077513  341230 cri.go:89] found id: "81e653c21515d606ea13ae7cc6d22ed82d4602cf4029cf8f71ab38a7b6a21823"
	I1216 03:08:25.077518  341230 cri.go:89] found id: "0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6"
	I1216 03:08:25.077521  341230 cri.go:89] found id: "424c3093fc615de39945cad66d5ba586f5bee74a165ec3d30b0e055e1bbe7a17"
	I1216 03:08:25.077524  341230 cri.go:89] found id: ""
	I1216 03:08:25.077558  341230 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:08:25.089066  341230 retry.go:31] will retry after 468.136655ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:08:25Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:08:25.557668  341230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:25.570437  341230 pause.go:52] kubelet running: false
	I1216 03:08:25.570488  341230 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:08:25.711142  341230 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:08:25.711249  341230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:08:25.781163  341230 cri.go:89] found id: "d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68"
	I1216 03:08:25.781190  341230 cri.go:89] found id: "42861ed8183ec9b607073cc1143c737d3eff40777a75bb80cb7974e97a232559"
	I1216 03:08:25.781196  341230 cri.go:89] found id: "9eec54ef0eb86273caa75b15f014b05844823ed2fcbbe238e3b384a5d99b6639"
	I1216 03:08:25.781201  341230 cri.go:89] found id: "ab93683ff228de0b42359c8c20af8f7ff9fc95e2443f32138c095e7e5f671a02"
	I1216 03:08:25.781206  341230 cri.go:89] found id: "7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f"
	I1216 03:08:25.781211  341230 cri.go:89] found id: "cf6f05491bb981c385f482944e6fdb86fd324db78c798013d940ed415f22f291"
	I1216 03:08:25.781214  341230 cri.go:89] found id: "a181636c6acb97bb608ea7a6cee423c766f5c5b809c9f71463703439007e8b17"
	I1216 03:08:25.781217  341230 cri.go:89] found id: "667d4cacc59090493c14b00dca21c677045a2a6fb1054fcb25d012a6e29094bf"
	I1216 03:08:25.781219  341230 cri.go:89] found id: "81e653c21515d606ea13ae7cc6d22ed82d4602cf4029cf8f71ab38a7b6a21823"
	I1216 03:08:25.781242  341230 cri.go:89] found id: "0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6"
	I1216 03:08:25.781252  341230 cri.go:89] found id: "424c3093fc615de39945cad66d5ba586f5bee74a165ec3d30b0e055e1bbe7a17"
	I1216 03:08:25.781257  341230 cri.go:89] found id: ""
	I1216 03:08:25.781312  341230 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:08:25.793083  341230 retry.go:31] will retry after 458.896948ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:08:25Z" level=error msg="open /run/runc: no such file or directory"
	I1216 03:08:26.252445  341230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:26.276438  341230 pause.go:52] kubelet running: false
	I1216 03:08:26.276500  341230 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 03:08:26.423705  341230 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 03:08:26.423804  341230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 03:08:26.508266  341230 cri.go:89] found id: "d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68"
	I1216 03:08:26.508308  341230 cri.go:89] found id: "42861ed8183ec9b607073cc1143c737d3eff40777a75bb80cb7974e97a232559"
	I1216 03:08:26.508314  341230 cri.go:89] found id: "9eec54ef0eb86273caa75b15f014b05844823ed2fcbbe238e3b384a5d99b6639"
	I1216 03:08:26.508319  341230 cri.go:89] found id: "ab93683ff228de0b42359c8c20af8f7ff9fc95e2443f32138c095e7e5f671a02"
	I1216 03:08:26.508324  341230 cri.go:89] found id: "7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f"
	I1216 03:08:26.508328  341230 cri.go:89] found id: "cf6f05491bb981c385f482944e6fdb86fd324db78c798013d940ed415f22f291"
	I1216 03:08:26.508333  341230 cri.go:89] found id: "a181636c6acb97bb608ea7a6cee423c766f5c5b809c9f71463703439007e8b17"
	I1216 03:08:26.508337  341230 cri.go:89] found id: "667d4cacc59090493c14b00dca21c677045a2a6fb1054fcb25d012a6e29094bf"
	I1216 03:08:26.508341  341230 cri.go:89] found id: "81e653c21515d606ea13ae7cc6d22ed82d4602cf4029cf8f71ab38a7b6a21823"
	I1216 03:08:26.508349  341230 cri.go:89] found id: "0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6"
	I1216 03:08:26.508354  341230 cri.go:89] found id: "424c3093fc615de39945cad66d5ba586f5bee74a165ec3d30b0e055e1bbe7a17"
	I1216 03:08:26.508358  341230 cri.go:89] found id: ""
	I1216 03:08:26.508411  341230 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 03:08:26.523881  341230 out.go:203] 
	W1216 03:08:26.525395  341230 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:08:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:08:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 03:08:26.525419  341230 out.go:285] * 
	* 
	W1216 03:08:26.530585  341230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:08:26.531775  341230 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-742794 --alsologtostderr -v=1 failed: exit status 80
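The Pause failures in this run all follow the sequence captured above: minikube stops the kubelet, lists the kube-system, kubernetes-dashboard and istio-operator containers through crictl, then runs "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory"; after several retries it gives up and reports GUEST_PAUSE (exit status 80). The sketch below is a hypothetical, minimal re-creation of that probe sequence for illustration only, assuming crictl and runc are installed on the node and runnable via sudo; it is not minikube's implementation.

// pause_probe.go: hypothetical re-creation of the probe sequence in the log above
// (NOT minikube's code). Assumes crictl and runc are on PATH and usable via sudo.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run mirrors the "ssh_runner.go:195] Run:" entries: execute a command and return
// its combined stdout/stderr plus any error.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. List CRI containers in one of the namespaces minikube pauses (kube-system shown here).
	if out, err := run("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system"); err == nil {
		fmt.Printf("CRI container IDs:\n%s", out)
	}

	// 2. Ask runc for its list of running containers, retrying as retry.go does in the log.
	//    In this run every attempt fails with "open /run/runc: no such file or directory",
	//    which is the condition surfaced as GUEST_PAUSE / exit status 80.
	for attempt := 1; attempt <= 4; attempt++ {
		out, err := run("sudo", "runc", "list", "-f", "json")
		if err == nil {
			fmt.Printf("runc sees:\n%s", out)
			return
		}
		fmt.Printf("attempt %d: runc list failed: %v\n%s", attempt, err, out)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("giving up after retries (the failure mode reported above)")
}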
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-742794
helpers_test.go:244: (dbg) docker inspect embed-certs-742794:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3",
	        "Created": "2025-12-16T03:06:20.456549573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324847,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:07:25.835474606Z",
	            "FinishedAt": "2025-12-16T03:07:24.883717353Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/hosts",
	        "LogPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3-json.log",
	        "Name": "/embed-certs-742794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-742794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-742794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3",
	                "LowerDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-742794",
	                "Source": "/var/lib/docker/volumes/embed-certs-742794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-742794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-742794",
	                "name.minikube.sigs.k8s.io": "embed-certs-742794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7669ce21782551ed65166fd9b66c65b66dd6a81497eca740f379a908267d1f5b",
	            "SandboxKey": "/var/run/docker/netns/7669ce217825",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-742794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "698574664c58f66fc30ac38bce099a4a38e50897a8947172848cad9a06889288",
	                    "EndpointID": "082efb495964d8232a5d69a037e1a49a457ce1fdf300076b32c70484139f1961",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "06:41:33:99:82:c7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-742794",
	                        "913c75f545a3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
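The "Ports" map in the inspect output above (22/tcp published on 127.0.0.1:33119) is what the earlier cli_runner template reads before sshutil opens its client on port 33119. The sketch below is a hypothetical illustration (not the test harness's code) that shells out to the same docker inspect -f template to recover the forwarded SSH port for a named container; the container name used is the profile from this run.

// ssh_port.go: hypothetical sketch of resolving the forwarded SSH port with the
// docker inspect template that appears in the log (not minikube's implementation).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	// Template copied from the cli_runner line in the log: index into
	// NetworkSettings.Ports["22/tcp"][0].HostPort.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %v: %s", container, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// "embed-certs-742794" is the profile/container name from this test run.
	port, err := hostSSHPort("embed-certs-742794")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh to 127.0.0.1 port", port) // expect 33119 per the inspect output above
}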
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794: exit status 2 (367.835292ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-742794 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-742794 logs -n 25: (1.475609282s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-646016 sudo systemctl status docker --all --full --no-pager                                                                                          │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo systemctl cat docker --no-pager                                                                                                          │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cat /etc/docker/daemon.json                                                                                                              │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo docker system info                                                                                                                       │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo systemctl status cri-docker --all --full --no-pager                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo systemctl cat cri-docker --no-pager                                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                 │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                           │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cri-dockerd --version                                                                                                                    │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo systemctl status containerd --all --full --no-pager                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo systemctl cat containerd --no-pager                                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cat /lib/systemd/system/containerd.service                                                                                               │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cat /etc/containerd/config.toml                                                                                                          │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo containerd config dump                                                                                                                   │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo systemctl status crio --all --full --no-pager                                                                                            │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo systemctl cat crio --no-pager                                                                                                            │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo crio config                                                                                                                              │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ delete  │ -p kindnet-646016                                                                                                                                               │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ start   │ -p enable-default-cni-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio │ enable-default-cni-646016 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p calico-646016 pgrep -a kubelet                                                                                                                               │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ -p custom-flannel-646016 pgrep -a kubelet                                                                                                                       │ custom-flannel-646016     │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ image   │ embed-certs-742794 image list --format=json                                                                                                                     │ embed-certs-742794        │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ pause   │ -p embed-certs-742794 --alsologtostderr -v=1                                                                                                                    │ embed-certs-742794        │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	│ ssh     │ -p calico-646016 sudo cat /etc/nsswitch.conf                                                                                                                    │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:07:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:07:58.870309  336341 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:07:58.870563  336341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:07:58.870571  336341 out.go:374] Setting ErrFile to fd 2...
	I1216 03:07:58.870575  336341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:07:58.870753  336341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:07:58.871237  336341 out.go:368] Setting JSON to false
	I1216 03:07:58.872385  336341 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3031,"bootTime":1765851448,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:07:58.872438  336341 start.go:143] virtualization: kvm guest
	I1216 03:07:58.874201  336341 out.go:179] * [enable-default-cni-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:07:58.875627  336341 notify.go:221] Checking for updates...
	I1216 03:07:58.875649  336341 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:07:58.876920  336341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:07:58.878010  336341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:07:58.879034  336341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:07:58.880178  336341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:07:58.881299  336341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:07:58.882829  336341 config.go:182] Loaded profile config "calico-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:58.882939  336341 config.go:182] Loaded profile config "custom-flannel-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:58.883018  336341 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:58.883149  336341 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:07:58.908129  336341 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:07:58.908298  336341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:07:58.966023  336341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:07:58.956109368 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:07:58.966170  336341 docker.go:319] overlay module found
	I1216 03:07:58.968494  336341 out.go:179] * Using the docker driver based on user configuration
	I1216 03:07:58.969637  336341 start.go:309] selected driver: docker
	I1216 03:07:58.969652  336341 start.go:927] validating driver "docker" against <nil>
	I1216 03:07:58.969663  336341 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:07:58.970241  336341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:07:59.026396  336341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:07:59.016299896 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:07:59.026582  336341 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1216 03:07:59.026769  336341 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1216 03:07:59.026791  336341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:07:59.028294  336341 out.go:179] * Using Docker driver with root privileges
	I1216 03:07:59.029566  336341 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:07:59.029585  336341 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:07:59.029655  336341 start.go:353] cluster config:
	{Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:07:59.030954  336341 out.go:179] * Starting "enable-default-cni-646016" primary control-plane node in "enable-default-cni-646016" cluster
	I1216 03:07:59.032126  336341 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:07:59.033278  336341 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:07:59.034348  336341 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:07:59.034383  336341 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:07:59.034392  336341 cache.go:65] Caching tarball of preloaded images
	I1216 03:07:59.034455  336341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:07:59.034496  336341 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:07:59.034506  336341 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:07:59.034588  336341 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/config.json ...
	I1216 03:07:59.034609  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/config.json: {Name:mk6c27771f22d38d86886e3d238898d3e2df8a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:59.055959  336341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:07:59.055982  336341 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:07:59.056002  336341 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:07:59.056038  336341 start.go:360] acquireMachinesLock for enable-default-cni-646016: {Name:mkf063c9177dae20297f3317db3407e754fd69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:07:59.056153  336341 start.go:364] duration metric: took 93.846µs to acquireMachinesLock for "enable-default-cni-646016"
	I1216 03:07:59.056182  336341 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:07:59.056276  336341 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:07:56.133147  327289 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:07:56.133202  327289 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1216 03:07:56.138499  327289 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1216 03:07:56.138527  327289 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1216 03:07:56.162750  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:07:56.574716  327289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:07:56.574807  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:56.574957  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-646016 minikube.k8s.io/updated_at=2025_12_16T03_07_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=custom-flannel-646016 minikube.k8s.io/primary=true
	I1216 03:07:56.666788  327289 ops.go:34] apiserver oom_adj: -16
	I1216 03:07:56.666933  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:57.167157  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:57.667265  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:58.168030  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:58.668058  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:59.167194  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:59.667518  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:00.167850  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1216 03:07:56.457839  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:07:58.955801  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	I1216 03:08:00.667299  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:00.758680  327289 kubeadm.go:1114] duration metric: took 4.183927417s to wait for elevateKubeSystemPrivileges
	I1216 03:08:00.758717  327289 kubeadm.go:403] duration metric: took 18.963447707s to StartCluster
	I1216 03:08:00.758740  327289 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:00.758809  327289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:08:00.761309  327289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:00.761599  327289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:08:00.761619  327289 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:08:00.761934  327289 config.go:182] Loaded profile config "custom-flannel-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:08:00.761995  327289 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:08:00.762066  327289 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-646016"
	I1216 03:08:00.762085  327289 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-646016"
	I1216 03:08:00.762122  327289 host.go:66] Checking if "custom-flannel-646016" exists ...
	I1216 03:08:00.762192  327289 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-646016"
	I1216 03:08:00.762213  327289 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-646016"
	I1216 03:08:00.762559  327289 cli_runner.go:164] Run: docker container inspect custom-flannel-646016 --format={{.State.Status}}
	I1216 03:08:00.762647  327289 cli_runner.go:164] Run: docker container inspect custom-flannel-646016 --format={{.State.Status}}
	I1216 03:08:00.764242  327289 out.go:179] * Verifying Kubernetes components...
	I1216 03:08:00.765850  327289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:08:00.791792  327289 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:08:00.793804  327289 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:08:00.793918  327289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:08:00.793983  327289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646016
	I1216 03:08:00.794638  327289 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-646016"
	I1216 03:08:00.794717  327289 host.go:66] Checking if "custom-flannel-646016" exists ...
	I1216 03:08:00.795259  327289 cli_runner.go:164] Run: docker container inspect custom-flannel-646016 --format={{.State.Status}}
	I1216 03:08:00.829131  327289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/custom-flannel-646016/id_rsa Username:docker}
	I1216 03:08:00.831644  327289 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:08:00.831664  327289 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:08:00.831849  327289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646016
	I1216 03:08:00.859274  327289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/custom-flannel-646016/id_rsa Username:docker}
	I1216 03:08:00.884477  327289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:08:00.949486  327289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:08:00.958236  327289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:08:00.981285  327289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:08:01.088041  327289 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 03:08:01.089603  327289 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-646016" to be "Ready" ...
	I1216 03:08:01.335208  327289 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:07:57.351096  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:07:57.351139  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:07:57.351151  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:07:57.351162  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:07:57.351171  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:07:57.351179  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:07:57.351188  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:07:57.351195  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:07:57.351203  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:07:57.351210  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:07:57.351231  320124 retry.go:31] will retry after 1.419549421s: missing components: kube-dns
	I1216 03:07:58.775386  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:07:58.775423  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:07:58.775436  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:07:58.775445  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:07:58.775451  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:07:58.775459  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:07:58.775466  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:07:58.775475  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:07:58.775481  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:07:58.775490  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:07:58.775507  320124 retry.go:31] will retry after 2.893963506s: missing components: kube-dns
	I1216 03:07:59.058511  336341 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:07:59.058785  336341 start.go:159] libmachine.API.Create for "enable-default-cni-646016" (driver="docker")
	I1216 03:07:59.058848  336341 client.go:173] LocalClient.Create starting
	I1216 03:07:59.058923  336341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:07:59.058972  336341 main.go:143] libmachine: Decoding PEM data...
	I1216 03:07:59.059030  336341 main.go:143] libmachine: Parsing certificate...
	I1216 03:07:59.059094  336341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:07:59.059134  336341 main.go:143] libmachine: Decoding PEM data...
	I1216 03:07:59.059153  336341 main.go:143] libmachine: Parsing certificate...
	I1216 03:07:59.059504  336341 cli_runner.go:164] Run: docker network inspect enable-default-cni-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:07:59.076671  336341 cli_runner.go:211] docker network inspect enable-default-cni-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:07:59.076763  336341 network_create.go:284] running [docker network inspect enable-default-cni-646016] to gather additional debugging logs...
	I1216 03:07:59.076789  336341 cli_runner.go:164] Run: docker network inspect enable-default-cni-646016
	W1216 03:07:59.094113  336341 cli_runner.go:211] docker network inspect enable-default-cni-646016 returned with exit code 1
	I1216 03:07:59.094139  336341 network_create.go:287] error running [docker network inspect enable-default-cni-646016]: docker network inspect enable-default-cni-646016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-646016 not found
	I1216 03:07:59.094165  336341 network_create.go:289] output of [docker network inspect enable-default-cni-646016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-646016 not found
	
	** /stderr **
	I1216 03:07:59.094250  336341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:07:59.114027  336341 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:07:59.114674  336341 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:07:59.115393  336341 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:07:59.116210  336341 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e399e0}
	I1216 03:07:59.116232  336341 network_create.go:124] attempt to create docker network enable-default-cni-646016 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 03:07:59.116303  336341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-646016 enable-default-cni-646016
	I1216 03:07:59.165940  336341 network_create.go:108] docker network enable-default-cni-646016 192.168.76.0/24 created
	I1216 03:07:59.165976  336341 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-646016" container
	I1216 03:07:59.166041  336341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:07:59.187672  336341 cli_runner.go:164] Run: docker volume create enable-default-cni-646016 --label name.minikube.sigs.k8s.io=enable-default-cni-646016 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:07:59.208837  336341 oci.go:103] Successfully created a docker volume enable-default-cni-646016
	I1216 03:07:59.208935  336341 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-646016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646016 --entrypoint /usr/bin/test -v enable-default-cni-646016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:07:59.848961  336341 oci.go:107] Successfully prepared a docker volume enable-default-cni-646016
	I1216 03:07:59.849035  336341 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:07:59.849049  336341 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:07:59.849132  336341 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 03:08:01.336652  327289 addons.go:530] duration metric: took 574.654324ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:08:01.592793  327289 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-646016" context rescaled to 1 replicas
	W1216 03:08:03.093147  327289 node_ready.go:57] node "custom-flannel-646016" has "Ready":"False" status (will retry)
	W1216 03:08:05.100148  327289 node_ready.go:57] node "custom-flannel-646016" has "Ready":"False" status (will retry)
	W1216 03:08:00.959607  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:08:03.456272  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:08:05.457380  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	I1216 03:08:01.674959  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:08:01.674995  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:08:01.675006  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:08:01.675018  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:01.675025  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:08:01.675031  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:08:01.675036  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:08:01.675042  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:08:01.675054  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:08:01.675059  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:08:01.675080  320124 retry.go:31] will retry after 2.852841152s: missing components: kube-dns
	I1216 03:08:04.537431  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:08:04.537640  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:08:04.537656  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:08:04.537665  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Running
	I1216 03:08:04.537682  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:08:04.537691  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:08:04.537697  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:08:04.537706  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:08:04.537714  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:08:04.537719  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:08:04.537729  320124 system_pods.go:126] duration metric: took 13.480139087s to wait for k8s-apps to be running ...
	I1216 03:08:04.537739  320124 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:08:04.537790  320124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:04.560920  320124 system_svc.go:56] duration metric: took 23.161939ms WaitForService to wait for kubelet
	I1216 03:08:04.561035  320124 kubeadm.go:587] duration metric: took 19.524702292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:08:04.561063  320124 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:08:04.565631  320124 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:08:04.565654  320124 node_conditions.go:123] node cpu capacity is 8
	I1216 03:08:04.565672  320124 node_conditions.go:105] duration metric: took 4.60317ms to run NodePressure ...
	I1216 03:08:04.565683  320124 start.go:242] waiting for startup goroutines ...
	I1216 03:08:04.565690  320124 start.go:247] waiting for cluster config update ...
	I1216 03:08:04.565701  320124 start.go:256] writing updated cluster config ...
	I1216 03:08:04.565967  320124 ssh_runner.go:195] Run: rm -f paused
	I1216 03:08:04.572305  320124 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:04.577689  320124 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dvcwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.584215  320124 pod_ready.go:94] pod "coredns-66bc5c9577-dvcwp" is "Ready"
	I1216 03:08:04.584244  320124 pod_ready.go:86] duration metric: took 6.531144ms for pod "coredns-66bc5c9577-dvcwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.587287  320124 pod_ready.go:83] waiting for pod "etcd-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.592761  320124 pod_ready.go:94] pod "etcd-calico-646016" is "Ready"
	I1216 03:08:04.592783  320124 pod_ready.go:86] duration metric: took 5.472316ms for pod "etcd-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.595293  320124 pod_ready.go:83] waiting for pod "kube-apiserver-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.602038  320124 pod_ready.go:94] pod "kube-apiserver-calico-646016" is "Ready"
	I1216 03:08:04.602066  320124 pod_ready.go:86] duration metric: took 6.751105ms for pod "kube-apiserver-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.604634  320124 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.979875  320124 pod_ready.go:94] pod "kube-controller-manager-calico-646016" is "Ready"
	I1216 03:08:04.979916  320124 pod_ready.go:86] duration metric: took 375.256086ms for pod "kube-controller-manager-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:05.178655  320124 pod_ready.go:83] waiting for pod "kube-proxy-ztq2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:05.578507  320124 pod_ready.go:94] pod "kube-proxy-ztq2k" is "Ready"
	I1216 03:08:05.578604  320124 pod_ready.go:86] duration metric: took 399.91786ms for pod "kube-proxy-ztq2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:05.778127  320124 pod_ready.go:83] waiting for pod "kube-scheduler-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:06.179400  320124 pod_ready.go:94] pod "kube-scheduler-calico-646016" is "Ready"
	I1216 03:08:06.179431  320124 pod_ready.go:86] duration metric: took 401.270254ms for pod "kube-scheduler-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:06.179445  320124 pod_ready.go:40] duration metric: took 1.607109772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:06.239728  320124 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:08:06.241655  320124 out.go:179] * Done! kubectl is now configured to use "calico-646016" cluster and "default" namespace by default
	I1216 03:08:04.240161  336341 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.390971755s)
	I1216 03:08:04.240198  336341 kic.go:203] duration metric: took 4.391146443s to extract preloaded images to volume ...
	W1216 03:08:04.240290  336341 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:08:04.240327  336341 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:08:04.240378  336341 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:08:04.347019  336341 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-646016 --name enable-default-cni-646016 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646016 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-646016 --network enable-default-cni-646016 --ip 192.168.76.2 --volume enable-default-cni-646016:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:08:04.725399  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Running}}
	I1216 03:08:04.747429  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:04.767774  336341 cli_runner.go:164] Run: docker exec enable-default-cni-646016 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:08:04.820679  336341 oci.go:144] the created container "enable-default-cni-646016" has a running status.
	I1216 03:08:04.820717  336341 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa...
	I1216 03:08:04.902133  336341 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:08:04.940385  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:04.965564  336341 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:08:04.965587  336341 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-646016 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:08:05.045035  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:05.071230  336341 machine.go:94] provisionDockerMachine start ...
	I1216 03:08:05.071450  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:05.101380  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:05.101780  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:05.101796  336341 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:08:05.102645  336341 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 03:08:08.244459  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646016
	
	I1216 03:08:08.244492  336341 ubuntu.go:182] provisioning hostname "enable-default-cni-646016"
	I1216 03:08:08.244559  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:08.263354  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:08.263646  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:08.263672  336341 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-646016 && echo "enable-default-cni-646016" | sudo tee /etc/hostname
	I1216 03:08:08.411933  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646016
	
	I1216 03:08:08.412014  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:08.431195  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:08.431431  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:08.431450  336341 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-646016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-646016/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-646016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:08:08.568475  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:08:08.568503  336341 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:08:08.568522  336341 ubuntu.go:190] setting up certificates
	I1216 03:08:08.568531  336341 provision.go:84] configureAuth start
	I1216 03:08:08.568579  336341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646016
	I1216 03:08:08.586777  336341 provision.go:143] copyHostCerts
	I1216 03:08:08.586879  336341 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:08:08.586901  336341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:08:08.586989  336341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:08:08.587100  336341 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:08:08.587112  336341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:08:08.587153  336341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:08:08.587233  336341 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:08:08.587243  336341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:08:08.587281  336341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:08:08.587348  336341 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-646016 san=[127.0.0.1 192.168.76.2 enable-default-cni-646016 localhost minikube]
	I1216 03:08:08.686898  336341 provision.go:177] copyRemoteCerts
	I1216 03:08:08.686955  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:08:08.686998  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:08.706623  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:08.806406  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 03:08:08.827183  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:08:08.846746  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:08:08.865507  336341 provision.go:87] duration metric: took 296.955017ms to configureAuth
	I1216 03:08:08.865539  336341 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:08:08.865719  336341 config.go:182] Loaded profile config "enable-default-cni-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:08:08.865849  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:06.592923  327289 node_ready.go:49] node "custom-flannel-646016" is "Ready"
	I1216 03:08:06.592950  327289 node_ready.go:38] duration metric: took 5.50331992s for node "custom-flannel-646016" to be "Ready" ...
	I1216 03:08:06.592963  327289 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:08:06.593019  327289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:08:06.604775  327289 api_server.go:72] duration metric: took 5.843121218s to wait for apiserver process to appear ...
	I1216 03:08:06.604800  327289 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:08:06.604827  327289 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 03:08:06.609304  327289 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 03:08:06.610288  327289 api_server.go:141] control plane version: v1.34.2
	I1216 03:08:06.610311  327289 api_server.go:131] duration metric: took 5.505683ms to wait for apiserver health ...
	I1216 03:08:06.610319  327289 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:08:06.614029  327289 system_pods.go:59] 7 kube-system pods found
	I1216 03:08:06.614069  327289 system_pods.go:61] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:06.614080  327289 system_pods.go:61] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:06.614089  327289 system_pods.go:61] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:06.614101  327289 system_pods.go:61] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:06.614111  327289 system_pods.go:61] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:06.614116  327289 system_pods.go:61] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:06.614126  327289 system_pods.go:61] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:06.614133  327289 system_pods.go:74] duration metric: took 3.807528ms to wait for pod list to return data ...
	I1216 03:08:06.614150  327289 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:08:06.616562  327289 default_sa.go:45] found service account: "default"
	I1216 03:08:06.616581  327289 default_sa.go:55] duration metric: took 2.421799ms for default service account to be created ...
	I1216 03:08:06.616590  327289 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:08:06.619513  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:06.619544  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:06.619553  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:06.619575  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:06.619583  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:06.619589  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:06.619595  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:06.619623  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:06.619650  327289 retry.go:31] will retry after 188.571069ms: missing components: kube-dns
	I1216 03:08:06.812790  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:06.812853  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:06.812860  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:06.812866  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:06.812872  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:06.812877  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:06.812882  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:06.812889  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:06.812906  327289 retry.go:31] will retry after 269.474978ms: missing components: kube-dns
	I1216 03:08:07.086721  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:07.086753  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:07.086761  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:07.086767  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:07.086771  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:07.086775  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:07.086778  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:07.086783  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:07.086796  327289 retry.go:31] will retry after 345.183644ms: missing components: kube-dns
	I1216 03:08:07.436048  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:07.436085  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:07.436093  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:07.436102  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:07.436109  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:07.436114  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:07.436120  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:07.436133  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:07.436150  327289 retry.go:31] will retry after 402.382971ms: missing components: kube-dns
	I1216 03:08:07.843560  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:07.843589  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:07.843595  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:07.843607  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:07.843614  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:07.843619  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:07.843625  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:07.843630  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:07.843647  327289 retry.go:31] will retry after 495.107547ms: missing components: kube-dns
	I1216 03:08:08.342503  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:08.342538  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:08.342543  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:08.342550  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:08.342554  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:08.342558  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:08.342561  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:08.342564  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:08.342578  327289 retry.go:31] will retry after 764.298983ms: missing components: kube-dns
	I1216 03:08:09.111900  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:09.111930  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:09.111936  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:09.111943  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:09.111947  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:09.111952  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:09.111955  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:09.111959  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:09.111974  327289 retry.go:31] will retry after 870.947057ms: missing components: kube-dns
	I1216 03:08:09.987279  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:09.987313  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:09.987318  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:09.987324  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:09.987332  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:09.987336  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:09.987339  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:09.987342  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:09.987356  327289 retry.go:31] will retry after 1.127635162s: missing components: kube-dns
	W1216 03:08:07.955704  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:08:09.955960  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	I1216 03:08:08.885244  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:08.885481  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:08.885499  336341 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:08:09.176304  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:08:09.176340  336341 machine.go:97] duration metric: took 4.10508772s to provisionDockerMachine
	I1216 03:08:09.176352  336341 client.go:176] duration metric: took 10.117495893s to LocalClient.Create
	I1216 03:08:09.176372  336341 start.go:167] duration metric: took 10.117588215s to libmachine.API.Create "enable-default-cni-646016"
	I1216 03:08:09.176381  336341 start.go:293] postStartSetup for "enable-default-cni-646016" (driver="docker")
	I1216 03:08:09.176397  336341 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:08:09.176485  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:08:09.176543  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.196952  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.298623  336341 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:08:09.302002  336341 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:08:09.302034  336341 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:08:09.302046  336341 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:08:09.302113  336341 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:08:09.302207  336341 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:08:09.302306  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:08:09.309995  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:08:09.330598  336341 start.go:296] duration metric: took 154.200303ms for postStartSetup
	I1216 03:08:09.331015  336341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646016
	I1216 03:08:09.348953  336341 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/config.json ...
	I1216 03:08:09.349224  336341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:08:09.349283  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.368047  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.463918  336341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:08:09.468547  336341 start.go:128] duration metric: took 10.412255965s to createHost
	I1216 03:08:09.468574  336341 start.go:83] releasing machines lock for "enable-default-cni-646016", held for 10.412407201s
	I1216 03:08:09.468629  336341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646016
	I1216 03:08:09.489776  336341 ssh_runner.go:195] Run: cat /version.json
	I1216 03:08:09.489844  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.489872  336341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:08:09.489950  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.508436  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.508998  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.665209  336341 ssh_runner.go:195] Run: systemctl --version
	I1216 03:08:09.671669  336341 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:08:09.706599  336341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:08:09.711670  336341 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:08:09.711722  336341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:08:09.739312  336341 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:08:09.739337  336341 start.go:496] detecting cgroup driver to use...
	I1216 03:08:09.739373  336341 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:08:09.739423  336341 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:08:09.755455  336341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:08:09.768005  336341 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:08:09.768055  336341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:08:09.785613  336341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:08:09.802861  336341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:08:09.887759  336341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:08:09.974398  336341 docker.go:234] disabling docker service ...
	I1216 03:08:09.974466  336341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:08:09.994022  336341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:08:10.006915  336341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:08:10.094973  336341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:08:10.178913  336341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:08:10.192767  336341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:08:10.207669  336341 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:08:10.207722  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.218263  336341 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:08:10.218331  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.227302  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.236185  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.244927  336341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:08:10.252937  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.261710  336341 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.275363  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.284517  336341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:08:10.291984  336341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:08:10.299171  336341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:08:10.378486  336341 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:08:10.843377  336341 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:08:10.843442  336341 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:08:10.847520  336341 start.go:564] Will wait 60s for crictl version
	I1216 03:08:10.847570  336341 ssh_runner.go:195] Run: which crictl
	I1216 03:08:10.851360  336341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:08:10.874737  336341 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:08:10.874804  336341 ssh_runner.go:195] Run: crio --version
	I1216 03:08:10.903749  336341 ssh_runner.go:195] Run: crio --version
	I1216 03:08:10.935080  336341 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 03:08:10.958125  324480 pod_ready.go:94] pod "coredns-66bc5c9577-rz62v" is "Ready"
	I1216 03:08:10.958169  324480 pod_ready.go:86] duration metric: took 32.008242289s for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.960757  324480 pod_ready.go:83] waiting for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.965072  324480 pod_ready.go:94] pod "etcd-embed-certs-742794" is "Ready"
	I1216 03:08:10.965089  324480 pod_ready.go:86] duration metric: took 4.308418ms for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.967233  324480 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.971011  324480 pod_ready.go:94] pod "kube-apiserver-embed-certs-742794" is "Ready"
	I1216 03:08:10.971030  324480 pod_ready.go:86] duration metric: took 3.781503ms for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.972914  324480 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.154723  324480 pod_ready.go:94] pod "kube-controller-manager-embed-certs-742794" is "Ready"
	I1216 03:08:11.154754  324480 pod_ready.go:86] duration metric: took 181.818907ms for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.354049  324480 pod_ready.go:83] waiting for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.754568  324480 pod_ready.go:94] pod "kube-proxy-899tv" is "Ready"
	I1216 03:08:11.754598  324480 pod_ready.go:86] duration metric: took 400.525561ms for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.954663  324480 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:12.354668  324480 pod_ready.go:94] pod "kube-scheduler-embed-certs-742794" is "Ready"
	I1216 03:08:12.354696  324480 pod_ready.go:86] duration metric: took 400.010965ms for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:12.354711  324480 pod_ready.go:40] duration metric: took 33.408689402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:12.412771  324480 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:08:12.415012  324480 out.go:179] * Done! kubectl is now configured to use "embed-certs-742794" cluster and "default" namespace by default
	I1216 03:08:10.936405  336341 cli_runner.go:164] Run: docker network inspect enable-default-cni-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:08:10.954836  336341 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 03:08:10.959680  336341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:08:10.971857  336341 kubeadm.go:884] updating cluster {Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:08:10.972006  336341 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:08:10.972081  336341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:08:11.004114  336341 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:08:11.004135  336341 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:08:11.004181  336341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:08:11.029519  336341 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:08:11.029539  336341 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:08:11.029545  336341 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 03:08:11.029628  336341 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=enable-default-cni-646016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1216 03:08:11.029698  336341 ssh_runner.go:195] Run: crio config
	I1216 03:08:11.074796  336341 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:08:11.074837  336341 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:08:11.074868  336341 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-646016 NodeName:enable-default-cni-646016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:08:11.075006  336341 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-646016"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:08:11.075072  336341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:08:11.083266  336341 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:08:11.083327  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:08:11.091248  336341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1216 03:08:11.103827  336341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:08:11.119310  336341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 03:08:11.132947  336341 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:08:11.136635  336341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:08:11.146487  336341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:08:11.231049  336341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:08:11.259536  336341 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016 for IP: 192.168.76.2
	I1216 03:08:11.259558  336341 certs.go:195] generating shared ca certs ...
	I1216 03:08:11.259577  336341 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.259768  336341 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:08:11.259856  336341 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:08:11.259872  336341 certs.go:257] generating profile certs ...
	I1216 03:08:11.259945  336341 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.key
	I1216 03:08:11.259968  336341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.crt with IP's: []
	I1216 03:08:11.493687  336341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.crt ...
	I1216 03:08:11.493712  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.crt: {Name:mk36be957c3f6e4e308d0508e4b59467834da1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.493932  336341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.key ...
	I1216 03:08:11.493951  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.key: {Name:mk7444d2f7692f28304ae915f9b55e9c99798a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.494069  336341 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64
	I1216 03:08:11.494086  336341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 03:08:11.724186  336341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64 ...
	I1216 03:08:11.724213  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64: {Name:mkc2aac88a53da8bf33d1e25029c82dc6fc0e58d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.724400  336341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64 ...
	I1216 03:08:11.724419  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64: {Name:mk9a85bf29933f41d39d06fba90d821fb048dd68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.724522  336341 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt
	I1216 03:08:11.724631  336341 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key
	I1216 03:08:11.724718  336341 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key
	I1216 03:08:11.724740  336341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt with IP's: []
	I1216 03:08:11.789665  336341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt ...
	I1216 03:08:11.789693  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt: {Name:mk0e84e0c50b14eb4bec375c93d4765472f16aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.789897  336341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key ...
	I1216 03:08:11.789918  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key: {Name:mk95393ee9755a4c21c03c2b4a0362d3fd9f2978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.790146  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:08:11.790203  336341 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:08:11.790219  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:08:11.790263  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:08:11.790308  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:08:11.790345  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:08:11.790404  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:08:11.791078  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:08:11.810513  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:08:11.828808  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:08:11.847636  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:08:11.865772  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 03:08:11.883895  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 03:08:11.901142  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:08:11.918507  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:08:11.935882  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:08:11.955785  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:08:11.973053  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:08:11.990707  336341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:08:12.003403  336341 ssh_runner.go:195] Run: openssl version
	I1216 03:08:12.009393  336341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.016909  336341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:08:12.024412  336341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.028129  336341 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.028171  336341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.063186  336341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:08:12.071109  336341 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:08:12.078722  336341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.086405  336341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:08:12.094612  336341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.098672  336341 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.098741  336341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.137744  336341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:08:12.145534  336341 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:08:12.153311  336341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.161455  336341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:08:12.168746  336341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.172545  336341 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.172596  336341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.210569  336341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:08:12.219185  336341 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:08:12.227595  336341 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:08:12.231493  336341 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:08:12.231568  336341 kubeadm.go:401] StartCluster: {Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:08:12.231662  336341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:08:12.231737  336341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:08:12.260343  336341 cri.go:89] found id: ""
	I1216 03:08:12.260407  336341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:08:12.268991  336341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:08:12.277890  336341 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:08:12.277967  336341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:08:12.287125  336341 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:08:12.287146  336341 kubeadm.go:158] found existing configuration files:
	
	I1216 03:08:12.287211  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:08:12.295341  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:08:12.295409  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:08:12.302841  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:08:12.311074  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:08:12.311143  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:08:12.318508  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:08:12.327776  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:08:12.327859  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:08:12.335771  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:08:12.344146  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:08:12.344202  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:08:12.352703  336341 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:08:12.405135  336341 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:08:12.405218  336341 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:08:12.431274  336341 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:08:12.431370  336341 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:08:12.431440  336341 kubeadm.go:319] OS: Linux
	I1216 03:08:12.431500  336341 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:08:12.431564  336341 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:08:12.431667  336341 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:08:12.431745  336341 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:08:12.431891  336341 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:08:12.431976  336341 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:08:12.432056  336341 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:08:12.432138  336341 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:08:12.509084  336341 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:08:12.509223  336341 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:08:12.509358  336341 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:08:12.517074  336341 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:08:12.519773  336341 out.go:252]   - Generating certificates and keys ...
	I1216 03:08:12.519891  336341 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:08:12.520018  336341 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:08:12.725300  336341 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:08:12.969403  336341 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:08:13.375727  336341 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:08:13.601180  336341 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:08:11.121289  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:11.121326  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:11.121335  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:11.121345  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:11.121352  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:11.121360  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:11.121365  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:11.121373  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:11.121388  327289 retry.go:31] will retry after 1.233440062s: missing components: kube-dns
	I1216 03:08:12.358695  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:12.358741  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:12.358752  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:12.358768  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:12.358777  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:12.358784  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:12.358790  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:12.358797  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:12.358810  327289 retry.go:31] will retry after 1.822030559s: missing components: kube-dns
	I1216 03:08:14.185955  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:14.186001  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:14.186010  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:14.186017  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:14.186026  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:14.186037  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:14.186043  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:14.186049  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:14.186070  327289 retry.go:31] will retry after 2.807521371s: missing components: kube-dns
	I1216 03:08:13.940262  336341 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:08:13.940475  336341 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:08:14.024195  336341 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:08:14.024397  336341 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:08:14.623212  336341 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:08:14.895703  336341 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:08:14.994613  336341 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:08:14.994689  336341 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:08:15.087054  336341 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:08:15.258741  336341 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:08:15.593339  336341 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:08:15.949425  336341 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:08:16.302134  336341 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:08:16.302611  336341 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:08:16.306362  336341 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:08:16.307899  336341 out.go:252]   - Booting up control plane ...
	I1216 03:08:16.307994  336341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:08:16.308104  336341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:08:16.308614  336341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:08:16.322329  336341 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:08:16.322461  336341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:08:16.329904  336341 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:08:16.330319  336341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:08:16.330386  336341 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:08:16.435554  336341 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:08:16.435735  336341 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:08:17.436293  336341 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000897774s
	I1216 03:08:17.439555  336341 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:08:17.439722  336341 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 03:08:17.439886  336341 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:08:17.440002  336341 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:08:16.997472  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:16.997508  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:16.997514  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:16.997520  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:16.997524  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:16.997528  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:16.997531  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:16.997534  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:16.997547  327289 retry.go:31] will retry after 2.719576061s: missing components: kube-dns
	I1216 03:08:19.724380  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:19.724417  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Running
	I1216 03:08:19.724426  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:19.724433  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:19.724438  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:19.724443  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:19.724448  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:19.724453  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:19.724462  327289 system_pods.go:126] duration metric: took 13.107865367s to wait for k8s-apps to be running ...
	I1216 03:08:19.724471  327289 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:08:19.724523  327289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:19.743141  327289 system_svc.go:56] duration metric: took 18.659974ms WaitForService to wait for kubelet
	I1216 03:08:19.743173  327289 kubeadm.go:587] duration metric: took 18.981523868s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:08:19.743193  327289 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:08:19.746714  327289 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:08:19.746753  327289 node_conditions.go:123] node cpu capacity is 8
	I1216 03:08:19.746777  327289 node_conditions.go:105] duration metric: took 3.578184ms to run NodePressure ...
	I1216 03:08:19.746793  327289 start.go:242] waiting for startup goroutines ...
	I1216 03:08:19.746809  327289 start.go:247] waiting for cluster config update ...
	I1216 03:08:19.746838  327289 start.go:256] writing updated cluster config ...
	I1216 03:08:19.747161  327289 ssh_runner.go:195] Run: rm -f paused
	I1216 03:08:19.751784  327289 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:19.756263  327289 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5jz9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.761149  327289 pod_ready.go:94] pod "coredns-66bc5c9577-5jz9m" is "Ready"
	I1216 03:08:19.761174  327289 pod_ready.go:86] duration metric: took 4.886245ms for pod "coredns-66bc5c9577-5jz9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.763455  327289 pod_ready.go:83] waiting for pod "etcd-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.767847  327289 pod_ready.go:94] pod "etcd-custom-flannel-646016" is "Ready"
	I1216 03:08:19.767871  327289 pod_ready.go:86] duration metric: took 4.393657ms for pod "etcd-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.769899  327289 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.774343  327289 pod_ready.go:94] pod "kube-apiserver-custom-flannel-646016" is "Ready"
	I1216 03:08:19.774365  327289 pod_ready.go:86] duration metric: took 4.448057ms for pod "kube-apiserver-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.776453  327289 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.156759  327289 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-646016" is "Ready"
	I1216 03:08:20.156792  327289 pod_ready.go:86] duration metric: took 380.316816ms for pod "kube-controller-manager-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.356883  327289 pod_ready.go:83] waiting for pod "kube-proxy-6wswf" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.756577  327289 pod_ready.go:94] pod "kube-proxy-6wswf" is "Ready"
	I1216 03:08:20.756610  327289 pod_ready.go:86] duration metric: took 399.701773ms for pod "kube-proxy-6wswf" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.957789  327289 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:21.356423  327289 pod_ready.go:94] pod "kube-scheduler-custom-flannel-646016" is "Ready"
	I1216 03:08:21.356519  327289 pod_ready.go:86] duration metric: took 398.643487ms for pod "kube-scheduler-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:21.356550  327289 pod_ready.go:40] duration metric: took 1.604733623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:21.405898  327289 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:08:21.407757  327289 out.go:179] * Done! kubectl is now configured to use "custom-flannel-646016" cluster and "default" namespace by default
	I1216 03:08:19.136994  336341 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.697392724s
	I1216 03:08:19.664959  336341 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.22534115s
	I1216 03:08:21.441753  336341 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002188144s
	I1216 03:08:21.460160  336341 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:08:21.470163  336341 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:08:21.480737  336341 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:08:21.481078  336341 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:08:21.489303  336341 kubeadm.go:319] [bootstrap-token] Using token: 4e611j.sp5pjcqpogdnm1bn
	I1216 03:08:21.490576  336341 out.go:252]   - Configuring RBAC rules ...
	I1216 03:08:21.490726  336341 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:08:21.494174  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:08:21.500362  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:08:21.502894  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:08:21.505166  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:08:21.507914  336341 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:08:21.851029  336341 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:08:22.260856  336341 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:08:22.848457  336341 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:08:22.849671  336341 kubeadm.go:319] 
	I1216 03:08:22.849766  336341 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:08:22.849779  336341 kubeadm.go:319] 
	I1216 03:08:22.849913  336341 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:08:22.849925  336341 kubeadm.go:319] 
	I1216 03:08:22.849954  336341 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:08:22.850035  336341 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:08:22.850104  336341 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:08:22.850110  336341 kubeadm.go:319] 
	I1216 03:08:22.850182  336341 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:08:22.850187  336341 kubeadm.go:319] 
	I1216 03:08:22.850255  336341 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:08:22.850261  336341 kubeadm.go:319] 
	I1216 03:08:22.850330  336341 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:08:22.850430  336341 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:08:22.850522  336341 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:08:22.850527  336341 kubeadm.go:319] 
	I1216 03:08:22.850637  336341 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:08:22.850747  336341 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:08:22.850754  336341 kubeadm.go:319] 
	I1216 03:08:22.850873  336341 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4e611j.sp5pjcqpogdnm1bn \
	I1216 03:08:22.851009  336341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:08:22.851038  336341 kubeadm.go:319] 	--control-plane 
	I1216 03:08:22.851043  336341 kubeadm.go:319] 
	I1216 03:08:22.851146  336341 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:08:22.851150  336341 kubeadm.go:319] 
	I1216 03:08:22.851230  336341 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4e611j.sp5pjcqpogdnm1bn \
	I1216 03:08:22.851359  336341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:08:22.855155  336341 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:08:22.855298  336341 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:08:22.855486  336341 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:08:22.857898  336341 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:08:22.859410  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:08:22.870592  336341 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 03:08:22.888623  336341 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:08:22.888779  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:22.888873  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-646016 minikube.k8s.io/updated_at=2025_12_16T03_08_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=enable-default-cni-646016 minikube.k8s.io/primary=true
	I1216 03:08:22.991871  336341 ops.go:34] apiserver oom_adj: -16
	I1216 03:08:22.992025  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:23.492377  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 16 03:08:00 embed-certs-742794 crio[564]: time="2025-12-16T03:08:00.924287624Z" level=info msg="Started container" PID=1761 containerID=9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper id=87ab52d7-efd7-4241-9d12-18e2b050d7b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=afcd90dfd8d85707de5dda050fb2190b08d57502c1d6e7eba8fef4d985c781eb
	Dec 16 03:08:01 embed-certs-742794 crio[564]: time="2025-12-16T03:08:01.022216278Z" level=info msg="Removing container: 1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93" id=2f79ad87-2596-44c5-9ad4-763ac97ebe93 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:08:01 embed-certs-742794 crio[564]: time="2025-12-16T03:08:01.036351631Z" level=info msg="Removed container 1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=2f79ad87-2596-44c5-9ad4-763ac97ebe93 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.038611565Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb616de5-c9d9-4834-84d5-4fe72844629f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.039601853Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4427f666-a22c-4ae4-935d-00b6d1117880 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.040918119Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5376f624-1030-4d51-8206-1cc71ee5517e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.041064961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.045668974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.045885966Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bd25dbb6165f48a14ffeabe995bc2b5e86b95fb4d6b4cb4bd8e3a7feaebccabf/merged/etc/passwd: no such file or directory"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.045920576Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bd25dbb6165f48a14ffeabe995bc2b5e86b95fb4d6b4cb4bd8e3a7feaebccabf/merged/etc/group: no such file or directory"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.046219339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.0703352Z" level=info msg="Created container d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68: kube-system/storage-provisioner/storage-provisioner" id=5376f624-1030-4d51-8206-1cc71ee5517e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.071115078Z" level=info msg="Starting container: d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68" id=90deab09-4164-4471-9d60-df6f4b654ed8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.072910758Z" level=info msg="Started container" PID=1775 containerID=d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68 description=kube-system/storage-provisioner/storage-provisioner id=90deab09-4164-4471-9d60-df6f4b654ed8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dd5ba515886d0fd16fab845092d737393a27e263a9fdc75b7513c6eb1890474
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.872508173Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a144c1d1-a479-4b6f-8cb2-1f8ceea8f873 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.873431808Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=78ab26bf-df26-43fe-a945-314e8d94ed3d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.874503162Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=3b329f92-5e4c-4cc4-86ec-4f625e4d631a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.874636954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.880603022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.881101461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.913116424Z" level=info msg="Created container 0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=3b329f92-5e4c-4cc4-86ec-4f625e4d631a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.913898323Z" level=info msg="Starting container: 0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6" id=680e1089-df14-4062-adfb-9fd0ed89c854 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.91610449Z" level=info msg="Started container" PID=1811 containerID=0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper id=680e1089-df14-4062-adfb-9fd0ed89c854 name=/runtime.v1.RuntimeService/StartContainer sandboxID=afcd90dfd8d85707de5dda050fb2190b08d57502c1d6e7eba8fef4d985c781eb
	Dec 16 03:08:24 embed-certs-742794 crio[564]: time="2025-12-16T03:08:24.083983047Z" level=info msg="Removing container: 9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f" id=e72937be-d42c-4871-bf7f-3dbb69394f99 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:08:24 embed-certs-742794 crio[564]: time="2025-12-16T03:08:24.09734664Z" level=info msg="Removed container 9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=e72937be-d42c-4871-bf7f-3dbb69394f99 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0916cce970194       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   afcd90dfd8d85       dashboard-metrics-scraper-6ffb444bf9-g2wm6   kubernetes-dashboard
	d82d7118ed087       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   0dd5ba515886d       storage-provisioner                          kube-system
	424c3093fc615       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   b63f0ed462484       kubernetes-dashboard-855c9754f9-4srjf        kubernetes-dashboard
	42861ed8183ec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   9e557055aa88a       coredns-66bc5c9577-rz62v                     kube-system
	cfa66554ca3d3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   354833035f347       busybox                                      default
	9eec54ef0eb86       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   36f4ca3077f31       kindnet-7vrj8                                kube-system
	ab93683ff228d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   6841ed9e4bc94       kube-proxy-899tv                             kube-system
	7ec84b1f0e67e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   0dd5ba515886d       storage-provisioner                          kube-system
	cf6f05491bb98       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           53 seconds ago      Running             kube-scheduler              0                   01847433d40e7       kube-scheduler-embed-certs-742794            kube-system
	a181636c6acb9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           53 seconds ago      Running             kube-apiserver              0                   63d651ca988a6       kube-apiserver-embed-certs-742794            kube-system
	667d4cacc5909       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   fc165b9de898a       etcd-embed-certs-742794                      kube-system
	81e653c21515d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           53 seconds ago      Running             kube-controller-manager     0                   63f2a0774f6cf       kube-controller-manager-embed-certs-742794   kube-system
	
	
	==> coredns [42861ed8183ec9b607073cc1143c737d3eff40777a75bb80cb7974e97a232559] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39399 - 36200 "HINFO IN 3177294149401379934.7077990097688895753. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014618639s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-742794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-742794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=embed-certs-742794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_06_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:06:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-742794
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:08:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-742794
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                227aaafb-25e6-44ee-81ce-b7feaed19af9
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-rz62v                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-embed-certs-742794                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-7vrj8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-embed-certs-742794             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-742794    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-899tv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-embed-certs-742794             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g2wm6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4srjf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node embed-certs-742794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node embed-certs-742794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s               kubelet          Node embed-certs-742794 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node embed-certs-742794 event: Registered Node embed-certs-742794 in Controller
	  Normal  NodeReady                91s                kubelet          Node embed-certs-742794 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-742794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-742794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-742794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node embed-certs-742794 event: Registered Node embed-certs-742794 in Controller
	
	
	==> dmesg <==
	[  +0.088842] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025418] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071144] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	
	
	==> etcd [667d4cacc59090493c14b00dca21c677045a2a6fb1054fcb25d012a6e29094bf] <==
	{"level":"warn","ts":"2025-12-16T03:07:36.701771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.721730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.721717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.733404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.741007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.747888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.762297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.778423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.786738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.802740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.814603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.822929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.830389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.836739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.843372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.849856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.856219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.863971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.876897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.883411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.890196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.935118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56804","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:07:57.923697Z","caller":"traceutil/trace.go:172","msg":"trace[606713941] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"111.438234ms","start":"2025-12-16T03:07:57.812240Z","end":"2025-12-16T03:07:57.923679Z","steps":["trace[606713941] 'process raft request'  (duration: 111.101859ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:08:10.667751Z","caller":"traceutil/trace.go:172","msg":"trace[729046098] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"121.35559ms","start":"2025-12-16T03:08:10.546365Z","end":"2025-12-16T03:08:10.667720Z","steps":["trace[729046098] 'process raft request'  (duration: 121.320874ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:08:10.667791Z","caller":"traceutil/trace.go:172","msg":"trace[761896218] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"122.407605ms","start":"2025-12-16T03:08:10.545358Z","end":"2025-12-16T03:08:10.667766Z","steps":["trace[761896218] 'process raft request'  (duration: 122.227214ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:08:28 up 50 min,  0 user,  load average: 4.71, 3.78, 2.40
	Linux embed-certs-742794 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9eec54ef0eb86273caa75b15f014b05844823ed2fcbbe238e3b384a5d99b6639] <==
	I1216 03:07:38.481269       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:07:38.481534       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 03:07:38.481694       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:07:38.481716       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:07:38.481741       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:07:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:07:38.879865       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:07:38.880000       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:07:38.880026       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:07:38.880226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:07:39.180265       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:07:39.180305       1 metrics.go:72] Registering metrics
	I1216 03:07:39.180386       1 controller.go:711] "Syncing nftables rules"
	I1216 03:07:48.683713       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:07:48.683949       1 main.go:301] handling current node
	I1216 03:07:58.686998       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:07:58.687049       1 main.go:301] handling current node
	I1216 03:08:08.684092       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:08:08.684154       1 main.go:301] handling current node
	I1216 03:08:18.689929       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:08:18.689968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a181636c6acb97bb608ea7a6cee423c766f5c5b809c9f71463703439007e8b17] <==
	I1216 03:07:37.420546       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:07:37.420371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:07:37.420589       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 03:07:37.420600       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:07:37.420610       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:07:37.420616       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:07:37.420622       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:07:37.420786       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 03:07:37.420786       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 03:07:37.420472       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 03:07:37.420992       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 03:07:37.427785       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:07:37.442838       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:07:37.445641       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:07:37.661992       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:07:37.689316       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:07:37.709234       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:07:37.716309       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:07:37.723712       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:07:37.757362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.207.243"}
	I1216 03:07:37.769025       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.150.218"}
	I1216 03:07:38.327423       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:07:41.102942       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:07:41.303534       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:07:41.353267       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [81e653c21515d606ea13ae7cc6d22ed82d4602cf4029cf8f71ab38a7b6a21823] <==
	I1216 03:07:40.750579       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 03:07:40.752332       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 03:07:40.754140       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:07:40.754196       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:07:40.754239       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:07:40.754247       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:07:40.754254       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:07:40.754490       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 03:07:40.756518       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:07:40.757520       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:07:40.758378       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 03:07:40.760869       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 03:07:40.761016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:07:40.764319       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 03:07:40.764501       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 03:07:40.764593       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-742794"
	I1216 03:07:40.764662       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 03:07:40.766427       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 03:07:40.767617       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:07:40.769507       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 03:07:40.771910       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 03:07:40.773101       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 03:07:40.779269       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:07:40.780759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:07:40.784997       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [ab93683ff228de0b42359c8c20af8f7ff9fc95e2443f32138c095e7e5f671a02] <==
	I1216 03:07:38.285296       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:07:38.356357       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:07:38.456919       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:07:38.456958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1216 03:07:38.457080       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:07:38.482139       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:07:38.482208       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:07:38.489329       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:07:38.489909       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:07:38.489984       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:07:38.492036       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:07:38.492121       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:07:38.492173       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:07:38.492180       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:07:38.492082       1 config.go:200] "Starting service config controller"
	I1216 03:07:38.492196       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:07:38.492279       1 config.go:309] "Starting node config controller"
	I1216 03:07:38.492291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:07:38.492298       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:07:38.592335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:07:38.592409       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:07:38.592411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cf6f05491bb981c385f482944e6fdb86fd324db78c798013d940ed415f22f291] <==
	I1216 03:07:35.332963       1 serving.go:386] Generated self-signed cert in-memory
	W1216 03:07:37.348497       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:07:37.348644       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:07:37.348663       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:07:37.348674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:07:37.383236       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:07:37.383263       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:07:37.386506       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:07:37.386571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:07:37.386795       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:07:37.386582       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:07:37.487102       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 03:07:41 embed-certs-742794 kubelet[726]: I1216 03:07:41.273070     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttvk\" (UniqueName: \"kubernetes.io/projected/0e3fb1ad-a5ab-41e6-94be-9b09ed1209a6-kube-api-access-dttvk\") pod \"kubernetes-dashboard-855c9754f9-4srjf\" (UID: \"0e3fb1ad-a5ab-41e6-94be-9b09ed1209a6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4srjf"
	Dec 16 03:07:44 embed-certs-742794 kubelet[726]: I1216 03:07:44.953639     726 scope.go:117] "RemoveContainer" containerID="0a508415f491dddd399750df78e09cc673a2434fb933adafa466abd31e00c266"
	Dec 16 03:07:45 embed-certs-742794 kubelet[726]: I1216 03:07:45.964413     726 scope.go:117] "RemoveContainer" containerID="0a508415f491dddd399750df78e09cc673a2434fb933adafa466abd31e00c266"
	Dec 16 03:07:45 embed-certs-742794 kubelet[726]: I1216 03:07:45.964672     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:07:45 embed-certs-742794 kubelet[726]: E1216 03:07:45.965175     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:07:46 embed-certs-742794 kubelet[726]: I1216 03:07:46.972286     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:07:46 embed-certs-742794 kubelet[726]: E1216 03:07:46.972513     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:07:49 embed-certs-742794 kubelet[726]: I1216 03:07:49.211061     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:07:49 embed-certs-742794 kubelet[726]: E1216 03:07:49.211310     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:07:52 embed-certs-742794 kubelet[726]: I1216 03:07:52.117522     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4srjf" podStartSLOduration=4.174256948 podStartE2EDuration="11.117499003s" podCreationTimestamp="2025-12-16 03:07:41 +0000 UTC" firstStartedPulling="2025-12-16 03:07:41.498416602 +0000 UTC m=+7.734941453" lastFinishedPulling="2025-12-16 03:07:48.441658668 +0000 UTC m=+14.678183508" observedRunningTime="2025-12-16 03:07:48.988181159 +0000 UTC m=+15.224706025" watchObservedRunningTime="2025-12-16 03:07:52.117499003 +0000 UTC m=+18.354023862"
	Dec 16 03:08:00 embed-certs-742794 kubelet[726]: I1216 03:08:00.871679     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:08:01 embed-certs-742794 kubelet[726]: I1216 03:08:01.014880     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:08:01 embed-certs-742794 kubelet[726]: I1216 03:08:01.015288     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:01 embed-certs-742794 kubelet[726]: E1216 03:08:01.015492     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:08:09 embed-certs-742794 kubelet[726]: I1216 03:08:09.038202     726 scope.go:117] "RemoveContainer" containerID="7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f"
	Dec 16 03:08:09 embed-certs-742794 kubelet[726]: I1216 03:08:09.211670     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:09 embed-certs-742794 kubelet[726]: E1216 03:08:09.211900     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:08:23 embed-certs-742794 kubelet[726]: I1216 03:08:23.872038     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:24 embed-certs-742794 kubelet[726]: I1216 03:08:24.082667     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:24 embed-certs-742794 kubelet[726]: I1216 03:08:24.082924     726 scope.go:117] "RemoveContainer" containerID="0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6"
	Dec 16 03:08:24 embed-certs-742794 kubelet[726]: E1216 03:08:24.083135     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: kubelet.service: Consumed 1.711s CPU time.
	
	
	==> kubernetes-dashboard [424c3093fc615de39945cad66d5ba586f5bee74a165ec3d30b0e055e1bbe7a17] <==
	2025/12/16 03:07:48 Using namespace: kubernetes-dashboard
	2025/12/16 03:07:48 Using in-cluster config to connect to apiserver
	2025/12/16 03:07:48 Using secret token for csrf signing
	2025/12/16 03:07:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:07:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:07:48 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 03:07:48 Generating JWE encryption key
	2025/12/16 03:07:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:07:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:07:48 Initializing JWE encryption key from synchronized object
	2025/12/16 03:07:48 Creating in-cluster Sidecar client
	2025/12/16 03:07:48 Serving insecurely on HTTP port: 9090
	2025/12/16 03:07:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:08:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:07:48 Starting overwatch
	
	
	==> storage-provisioner [7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f] <==
	I1216 03:07:38.235428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:08:08.239057       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68] <==
	I1216 03:08:09.086372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:08:09.094036       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:08:09.094078       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:08:09.096160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:12.550994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:16.811788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:20.410373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:23.464941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:26.486780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:26.492440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:08:26.492626       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:08:26.492811       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-742794_8a662e1d-b1c1-4e53-bd5e-71ccf8636d85!
	I1216 03:08:26.492810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcdc7c73-4d43-45a4-8fda-ffef275cc1fa", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-742794_8a662e1d-b1c1-4e53-bd5e-71ccf8636d85 became leader
	W1216 03:08:26.497182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:26.506031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:08:26.593557       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-742794_8a662e1d-b1c1-4e53-bd5e-71ccf8636d85!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742794 -n embed-certs-742794
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742794 -n embed-certs-742794: exit status 2 (370.348316ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-742794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-742794
helpers_test.go:244: (dbg) docker inspect embed-certs-742794:

-- stdout --
	[
	    {
	        "Id": "913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3",
	        "Created": "2025-12-16T03:06:20.456549573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324847,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:07:25.835474606Z",
	            "FinishedAt": "2025-12-16T03:07:24.883717353Z"
	        },
	        "Image": "sha256:ac6d1a87b942856a6395a8f45072aa18f86cb94c30fd27d38c9c7a95c8d05c96",
	        "ResolvConfPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/hosts",
	        "LogPath": "/var/lib/docker/containers/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3/913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3-json.log",
	        "Name": "/embed-certs-742794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-742794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-742794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "913c75f545a3465ca2173f9eb9ca64cbd435c435ba015aed5e4a21e007a127f3",
	                "LowerDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20-init/diff:/var/lib/docker/overlay2/a0869d7551e63609fc7ff32d569f01b8186acf47acc689f6fe6d87bb22daab72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfd60d4d053719c3a15e0e613ec6cdd39f07896fe862376dc73b344781a89f20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-742794",
	                "Source": "/var/lib/docker/volumes/embed-certs-742794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-742794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-742794",
	                "name.minikube.sigs.k8s.io": "embed-certs-742794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7669ce21782551ed65166fd9b66c65b66dd6a81497eca740f379a908267d1f5b",
	            "SandboxKey": "/var/run/docker/netns/7669ce217825",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-742794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "698574664c58f66fc30ac38bce099a4a38e50897a8947172848cad9a06889288",
	                    "EndpointID": "082efb495964d8232a5d69a037e1a49a457ce1fdf300076b32c70484139f1961",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "06:41:33:99:82:c7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-742794",
	                        "913c75f545a3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
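For reference, the 22/tcp→33119 mapping shown in the inspect output above is the same value the harness later reads back with a Go-template docker container inspect (see the cli_runner lines further down in this log). A minimal Go sketch of that query, assuming only that the docker CLI is on PATH and the embed-certs-742794 container from the output above still exists:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the harness uses to find the host port mapped to the
		// container's SSH port (22/tcp); container name taken from the inspect output above.
		const name = "embed-certs-742794"
		const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err != nil {
			fmt.Println("docker container inspect failed:", err)
			return
		}
		fmt.Println("22/tcp published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}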
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794: exit status 2 (370.82479ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-742794 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-742794 logs -n 25: (1.182125466s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-646016 sudo systemctl status cri-docker --all --full --no-pager                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo systemctl cat cri-docker --no-pager                                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                 │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                           │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cri-dockerd --version                                                                                                                    │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo systemctl status containerd --all --full --no-pager                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p kindnet-646016 sudo systemctl cat containerd --no-pager                                                                                                      │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cat /lib/systemd/system/containerd.service                                                                                               │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo cat /etc/containerd/config.toml                                                                                                          │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo containerd config dump                                                                                                                   │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo systemctl status crio --all --full --no-pager                                                                                            │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo systemctl cat crio --no-pager                                                                                                            │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ ssh     │ -p kindnet-646016 sudo crio config                                                                                                                              │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ delete  │ -p kindnet-646016                                                                                                                                               │ kindnet-646016            │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │ 16 Dec 25 03:07 UTC │
	│ start   │ -p enable-default-cni-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio │ enable-default-cni-646016 │ jenkins │ v1.37.0 │ 16 Dec 25 03:07 UTC │                     │
	│ ssh     │ -p calico-646016 pgrep -a kubelet                                                                                                                               │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ -p custom-flannel-646016 pgrep -a kubelet                                                                                                                       │ custom-flannel-646016     │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ image   │ embed-certs-742794 image list --format=json                                                                                                                     │ embed-certs-742794        │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ pause   │ -p embed-certs-742794 --alsologtostderr -v=1                                                                                                                    │ embed-certs-742794        │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	│ ssh     │ -p calico-646016 sudo cat /etc/nsswitch.conf                                                                                                                    │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ -p calico-646016 sudo cat /etc/hosts                                                                                                                            │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ -p calico-646016 sudo cat /etc/resolv.conf                                                                                                                      │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ -p calico-646016 sudo crictl pods                                                                                                                               │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ -p calico-646016 sudo crictl ps --all                                                                                                                           │ calico-646016             │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:07:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:07:58.870309  336341 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:07:58.870563  336341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:07:58.870571  336341 out.go:374] Setting ErrFile to fd 2...
	I1216 03:07:58.870575  336341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:07:58.870753  336341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:07:58.871237  336341 out.go:368] Setting JSON to false
	I1216 03:07:58.872385  336341 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3031,"bootTime":1765851448,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:07:58.872438  336341 start.go:143] virtualization: kvm guest
	I1216 03:07:58.874201  336341 out.go:179] * [enable-default-cni-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:07:58.875627  336341 notify.go:221] Checking for updates...
	I1216 03:07:58.875649  336341 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:07:58.876920  336341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:07:58.878010  336341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:07:58.879034  336341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:07:58.880178  336341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:07:58.881299  336341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:07:58.882829  336341 config.go:182] Loaded profile config "calico-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:58.882939  336341 config.go:182] Loaded profile config "custom-flannel-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:58.883018  336341 config.go:182] Loaded profile config "embed-certs-742794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:07:58.883149  336341 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:07:58.908129  336341 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:07:58.908298  336341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:07:58.966023  336341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:07:58.956109368 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:07:58.966170  336341 docker.go:319] overlay module found
	I1216 03:07:58.968494  336341 out.go:179] * Using the docker driver based on user configuration
	I1216 03:07:58.969637  336341 start.go:309] selected driver: docker
	I1216 03:07:58.969652  336341 start.go:927] validating driver "docker" against <nil>
	I1216 03:07:58.969663  336341 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:07:58.970241  336341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:07:59.026396  336341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:07:59.016299896 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:07:59.026582  336341 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1216 03:07:59.026769  336341 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1216 03:07:59.026791  336341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:07:59.028294  336341 out.go:179] * Using Docker driver with root privileges
	I1216 03:07:59.029566  336341 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:07:59.029585  336341 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:07:59.029655  336341 start.go:353] cluster config:
	{Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:07:59.030954  336341 out.go:179] * Starting "enable-default-cni-646016" primary control-plane node in "enable-default-cni-646016" cluster
	I1216 03:07:59.032126  336341 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 03:07:59.033278  336341 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:07:59.034348  336341 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:07:59.034383  336341 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:07:59.034392  336341 cache.go:65] Caching tarball of preloaded images
	I1216 03:07:59.034455  336341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:07:59.034496  336341 preload.go:238] Found /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:07:59.034506  336341 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:07:59.034588  336341 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/config.json ...
	I1216 03:07:59.034609  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/config.json: {Name:mk6c27771f22d38d86886e3d238898d3e2df8a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:07:59.055959  336341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:07:59.055982  336341 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:07:59.056002  336341 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:07:59.056038  336341 start.go:360] acquireMachinesLock for enable-default-cni-646016: {Name:mkf063c9177dae20297f3317db3407e754fd69de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:07:59.056153  336341 start.go:364] duration metric: took 93.846µs to acquireMachinesLock for "enable-default-cni-646016"
	I1216 03:07:59.056182  336341 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:07:59.056276  336341 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:07:56.133147  327289 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 03:07:56.133202  327289 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1216 03:07:56.138499  327289 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1216 03:07:56.138527  327289 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1216 03:07:56.162750  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 03:07:56.574716  327289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:07:56.574807  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:56.574957  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-646016 minikube.k8s.io/updated_at=2025_12_16T03_07_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=custom-flannel-646016 minikube.k8s.io/primary=true
	I1216 03:07:56.666788  327289 ops.go:34] apiserver oom_adj: -16
	I1216 03:07:56.666933  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:57.167157  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:57.667265  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:58.168030  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:58.668058  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:59.167194  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:07:59.667518  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:00.167850  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1216 03:07:56.457839  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:07:58.955801  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	I1216 03:08:00.667299  327289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:00.758680  327289 kubeadm.go:1114] duration metric: took 4.183927417s to wait for elevateKubeSystemPrivileges
	I1216 03:08:00.758717  327289 kubeadm.go:403] duration metric: took 18.963447707s to StartCluster
	I1216 03:08:00.758740  327289 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:00.758809  327289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:08:00.761309  327289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:00.761599  327289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:08:00.761619  327289 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:08:00.761934  327289 config.go:182] Loaded profile config "custom-flannel-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:08:00.761995  327289 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:08:00.762066  327289 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-646016"
	I1216 03:08:00.762085  327289 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-646016"
	I1216 03:08:00.762122  327289 host.go:66] Checking if "custom-flannel-646016" exists ...
	I1216 03:08:00.762192  327289 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-646016"
	I1216 03:08:00.762213  327289 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-646016"
	I1216 03:08:00.762559  327289 cli_runner.go:164] Run: docker container inspect custom-flannel-646016 --format={{.State.Status}}
	I1216 03:08:00.762647  327289 cli_runner.go:164] Run: docker container inspect custom-flannel-646016 --format={{.State.Status}}
	I1216 03:08:00.764242  327289 out.go:179] * Verifying Kubernetes components...
	I1216 03:08:00.765850  327289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:08:00.791792  327289 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:08:00.793804  327289 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:08:00.793918  327289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:08:00.793983  327289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646016
	I1216 03:08:00.794638  327289 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-646016"
	I1216 03:08:00.794717  327289 host.go:66] Checking if "custom-flannel-646016" exists ...
	I1216 03:08:00.795259  327289 cli_runner.go:164] Run: docker container inspect custom-flannel-646016 --format={{.State.Status}}
	I1216 03:08:00.829131  327289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/custom-flannel-646016/id_rsa Username:docker}
	I1216 03:08:00.831644  327289 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:08:00.831664  327289 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:08:00.831849  327289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-646016
	I1216 03:08:00.859274  327289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/custom-flannel-646016/id_rsa Username:docker}
	I1216 03:08:00.884477  327289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:08:00.949486  327289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:08:00.958236  327289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:08:00.981285  327289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:08:01.088041  327289 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 03:08:01.089603  327289 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-646016" to be "Ready" ...
	I1216 03:08:01.335208  327289 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:07:57.351096  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:07:57.351139  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:07:57.351151  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:07:57.351162  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:07:57.351171  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:07:57.351179  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:07:57.351188  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:07:57.351195  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:07:57.351203  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:07:57.351210  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:07:57.351231  320124 retry.go:31] will retry after 1.419549421s: missing components: kube-dns
	I1216 03:07:58.775386  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:07:58.775423  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:07:58.775436  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:07:58.775445  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:07:58.775451  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:07:58.775459  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:07:58.775466  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:07:58.775475  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:07:58.775481  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:07:58.775490  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:07:58.775507  320124 retry.go:31] will retry after 2.893963506s: missing components: kube-dns
	I1216 03:07:59.058511  336341 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:07:59.058785  336341 start.go:159] libmachine.API.Create for "enable-default-cni-646016" (driver="docker")
	I1216 03:07:59.058848  336341 client.go:173] LocalClient.Create starting
	I1216 03:07:59.058923  336341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem
	I1216 03:07:59.058972  336341 main.go:143] libmachine: Decoding PEM data...
	I1216 03:07:59.059030  336341 main.go:143] libmachine: Parsing certificate...
	I1216 03:07:59.059094  336341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem
	I1216 03:07:59.059134  336341 main.go:143] libmachine: Decoding PEM data...
	I1216 03:07:59.059153  336341 main.go:143] libmachine: Parsing certificate...
	I1216 03:07:59.059504  336341 cli_runner.go:164] Run: docker network inspect enable-default-cni-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:07:59.076671  336341 cli_runner.go:211] docker network inspect enable-default-cni-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:07:59.076763  336341 network_create.go:284] running [docker network inspect enable-default-cni-646016] to gather additional debugging logs...
	I1216 03:07:59.076789  336341 cli_runner.go:164] Run: docker network inspect enable-default-cni-646016
	W1216 03:07:59.094113  336341 cli_runner.go:211] docker network inspect enable-default-cni-646016 returned with exit code 1
	I1216 03:07:59.094139  336341 network_create.go:287] error running [docker network inspect enable-default-cni-646016]: docker network inspect enable-default-cni-646016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-646016 not found
	I1216 03:07:59.094165  336341 network_create.go:289] output of [docker network inspect enable-default-cni-646016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-646016 not found
	
	** /stderr **
	I1216 03:07:59.094250  336341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:07:59.114027  336341 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
	I1216 03:07:59.114674  336341 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88a956106d89 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:d8:2d:33:44:e5} reservation:<nil>}
	I1216 03:07:59.115393  336341 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa5eb281ed4e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:29:47:5d:c3:fb} reservation:<nil>}
	I1216 03:07:59.116210  336341 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e399e0}
	I1216 03:07:59.116232  336341 network_create.go:124] attempt to create docker network enable-default-cni-646016 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 03:07:59.116303  336341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-646016 enable-default-cni-646016
	I1216 03:07:59.165940  336341 network_create.go:108] docker network enable-default-cni-646016 192.168.76.0/24 created
	I1216 03:07:59.165976  336341 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-646016" container
	I1216 03:07:59.166041  336341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:07:59.187672  336341 cli_runner.go:164] Run: docker volume create enable-default-cni-646016 --label name.minikube.sigs.k8s.io=enable-default-cni-646016 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:07:59.208837  336341 oci.go:103] Successfully created a docker volume enable-default-cni-646016
	I1216 03:07:59.208935  336341 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-646016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646016 --entrypoint /usr/bin/test -v enable-default-cni-646016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:07:59.848961  336341 oci.go:107] Successfully prepared a docker volume enable-default-cni-646016
	I1216 03:07:59.849035  336341 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:07:59.849049  336341 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 03:07:59.849132  336341 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 03:08:01.336652  327289 addons.go:530] duration metric: took 574.654324ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:08:01.592793  327289 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-646016" context rescaled to 1 replicas
	W1216 03:08:03.093147  327289 node_ready.go:57] node "custom-flannel-646016" has "Ready":"False" status (will retry)
	W1216 03:08:05.100148  327289 node_ready.go:57] node "custom-flannel-646016" has "Ready":"False" status (will retry)
	W1216 03:08:00.959607  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:08:03.456272  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:08:05.457380  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	I1216 03:08:01.674959  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:08:01.674995  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:08:01.675006  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:08:01.675018  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:01.675025  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:08:01.675031  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:08:01.675036  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:08:01.675042  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:08:01.675054  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:08:01.675059  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:08:01.675080  320124 retry.go:31] will retry after 2.852841152s: missing components: kube-dns
	I1216 03:08:04.537431  320124 system_pods.go:86] 9 kube-system pods found
	I1216 03:08:04.537640  320124 system_pods.go:89] "calico-kube-controllers-5c676f698c-czb6r" [02440fb8-81fa-4227-aa95-5cb6737da80b] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 03:08:04.537656  320124 system_pods.go:89] "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 03:08:04.537665  320124 system_pods.go:89] "coredns-66bc5c9577-dvcwp" [c112b208-e87c-4f35-9a48-702fa7fa32e0] Running
	I1216 03:08:04.537682  320124 system_pods.go:89] "etcd-calico-646016" [8e229aa6-8406-4181-80df-44963ade4b03] Running
	I1216 03:08:04.537691  320124 system_pods.go:89] "kube-apiserver-calico-646016" [4a36764f-92ef-47ac-858e-c52686f4664f] Running
	I1216 03:08:04.537697  320124 system_pods.go:89] "kube-controller-manager-calico-646016" [379f896c-e078-401c-8d8a-7c1785ccdab6] Running
	I1216 03:08:04.537706  320124 system_pods.go:89] "kube-proxy-ztq2k" [70c0df76-8996-4837-b8ce-6dece1358f47] Running
	I1216 03:08:04.537714  320124 system_pods.go:89] "kube-scheduler-calico-646016" [0674d349-787a-42da-90ba-e5288233f0e8] Running
	I1216 03:08:04.537719  320124 system_pods.go:89] "storage-provisioner" [cd094bbd-7e58-4bbb-8990-7785d7a9c9ef] Running
	I1216 03:08:04.537729  320124 system_pods.go:126] duration metric: took 13.480139087s to wait for k8s-apps to be running ...
	I1216 03:08:04.537739  320124 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:08:04.537790  320124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:04.560920  320124 system_svc.go:56] duration metric: took 23.161939ms WaitForService to wait for kubelet
	I1216 03:08:04.561035  320124 kubeadm.go:587] duration metric: took 19.524702292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:08:04.561063  320124 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:08:04.565631  320124 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:08:04.565654  320124 node_conditions.go:123] node cpu capacity is 8
	I1216 03:08:04.565672  320124 node_conditions.go:105] duration metric: took 4.60317ms to run NodePressure ...
	I1216 03:08:04.565683  320124 start.go:242] waiting for startup goroutines ...
	I1216 03:08:04.565690  320124 start.go:247] waiting for cluster config update ...
	I1216 03:08:04.565701  320124 start.go:256] writing updated cluster config ...
	I1216 03:08:04.565967  320124 ssh_runner.go:195] Run: rm -f paused
	I1216 03:08:04.572305  320124 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:04.577689  320124 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dvcwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.584215  320124 pod_ready.go:94] pod "coredns-66bc5c9577-dvcwp" is "Ready"
	I1216 03:08:04.584244  320124 pod_ready.go:86] duration metric: took 6.531144ms for pod "coredns-66bc5c9577-dvcwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.587287  320124 pod_ready.go:83] waiting for pod "etcd-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.592761  320124 pod_ready.go:94] pod "etcd-calico-646016" is "Ready"
	I1216 03:08:04.592783  320124 pod_ready.go:86] duration metric: took 5.472316ms for pod "etcd-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.595293  320124 pod_ready.go:83] waiting for pod "kube-apiserver-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.602038  320124 pod_ready.go:94] pod "kube-apiserver-calico-646016" is "Ready"
	I1216 03:08:04.602066  320124 pod_ready.go:86] duration metric: took 6.751105ms for pod "kube-apiserver-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.604634  320124 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:04.979875  320124 pod_ready.go:94] pod "kube-controller-manager-calico-646016" is "Ready"
	I1216 03:08:04.979916  320124 pod_ready.go:86] duration metric: took 375.256086ms for pod "kube-controller-manager-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:05.178655  320124 pod_ready.go:83] waiting for pod "kube-proxy-ztq2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:05.578507  320124 pod_ready.go:94] pod "kube-proxy-ztq2k" is "Ready"
	I1216 03:08:05.578604  320124 pod_ready.go:86] duration metric: took 399.91786ms for pod "kube-proxy-ztq2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:05.778127  320124 pod_ready.go:83] waiting for pod "kube-scheduler-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:06.179400  320124 pod_ready.go:94] pod "kube-scheduler-calico-646016" is "Ready"
	I1216 03:08:06.179431  320124 pod_ready.go:86] duration metric: took 401.270254ms for pod "kube-scheduler-calico-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:06.179445  320124 pod_ready.go:40] duration metric: took 1.607109772s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:06.239728  320124 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:08:06.241655  320124 out.go:179] * Done! kubectl is now configured to use "calico-646016" cluster and "default" namespace by default
	I1216 03:08:04.240161  336341 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-646016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.390971755s)
	I1216 03:08:04.240198  336341 kic.go:203] duration metric: took 4.391146443s to extract preloaded images to volume ...
	W1216 03:08:04.240290  336341 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 03:08:04.240327  336341 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 03:08:04.240378  336341 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:08:04.347019  336341 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-646016 --name enable-default-cni-646016 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-646016 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-646016 --network enable-default-cni-646016 --ip 192.168.76.2 --volume enable-default-cni-646016:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:08:04.725399  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Running}}
	I1216 03:08:04.747429  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:04.767774  336341 cli_runner.go:164] Run: docker exec enable-default-cni-646016 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:08:04.820679  336341 oci.go:144] the created container "enable-default-cni-646016" has a running status.
	I1216 03:08:04.820717  336341 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa...
	I1216 03:08:04.902133  336341 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:08:04.940385  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:04.965564  336341 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:08:04.965587  336341 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-646016 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:08:05.045035  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:05.071230  336341 machine.go:94] provisionDockerMachine start ...
	I1216 03:08:05.071450  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:05.101380  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:05.101780  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:05.101796  336341 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:08:05.102645  336341 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 03:08:08.244459  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646016
	
	I1216 03:08:08.244492  336341 ubuntu.go:182] provisioning hostname "enable-default-cni-646016"
	I1216 03:08:08.244559  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:08.263354  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:08.263646  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:08.263672  336341 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-646016 && echo "enable-default-cni-646016" | sudo tee /etc/hostname
	I1216 03:08:08.411933  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-646016
	
	I1216 03:08:08.412014  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:08.431195  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:08.431431  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:08.431450  336341 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-646016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-646016/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-646016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:08:08.568475  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:08:08.568503  336341 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5058/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5058/.minikube}
	I1216 03:08:08.568522  336341 ubuntu.go:190] setting up certificates
	I1216 03:08:08.568531  336341 provision.go:84] configureAuth start
	I1216 03:08:08.568579  336341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646016
	I1216 03:08:08.586777  336341 provision.go:143] copyHostCerts
	I1216 03:08:08.586879  336341 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem, removing ...
	I1216 03:08:08.586901  336341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem
	I1216 03:08:08.586989  336341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/key.pem (1679 bytes)
	I1216 03:08:08.587100  336341 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem, removing ...
	I1216 03:08:08.587112  336341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem
	I1216 03:08:08.587153  336341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/ca.pem (1078 bytes)
	I1216 03:08:08.587233  336341 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem, removing ...
	I1216 03:08:08.587243  336341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem
	I1216 03:08:08.587281  336341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5058/.minikube/cert.pem (1123 bytes)
	I1216 03:08:08.587348  336341 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-646016 san=[127.0.0.1 192.168.76.2 enable-default-cni-646016 localhost minikube]
	I1216 03:08:08.686898  336341 provision.go:177] copyRemoteCerts
	I1216 03:08:08.686955  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:08:08.686998  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:08.706623  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:08.806406  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 03:08:08.827183  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:08:08.846746  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:08:08.865507  336341 provision.go:87] duration metric: took 296.955017ms to configureAuth
	I1216 03:08:08.865539  336341 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:08:08.865719  336341 config.go:182] Loaded profile config "enable-default-cni-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:08:08.865849  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:06.592923  327289 node_ready.go:49] node "custom-flannel-646016" is "Ready"
	I1216 03:08:06.592950  327289 node_ready.go:38] duration metric: took 5.50331992s for node "custom-flannel-646016" to be "Ready" ...
	I1216 03:08:06.592963  327289 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:08:06.593019  327289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:08:06.604775  327289 api_server.go:72] duration metric: took 5.843121218s to wait for apiserver process to appear ...
	I1216 03:08:06.604800  327289 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:08:06.604827  327289 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 03:08:06.609304  327289 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 03:08:06.610288  327289 api_server.go:141] control plane version: v1.34.2
	I1216 03:08:06.610311  327289 api_server.go:131] duration metric: took 5.505683ms to wait for apiserver health ...
	I1216 03:08:06.610319  327289 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:08:06.614029  327289 system_pods.go:59] 7 kube-system pods found
	I1216 03:08:06.614069  327289 system_pods.go:61] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:06.614080  327289 system_pods.go:61] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:06.614089  327289 system_pods.go:61] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:06.614101  327289 system_pods.go:61] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:06.614111  327289 system_pods.go:61] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:06.614116  327289 system_pods.go:61] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:06.614126  327289 system_pods.go:61] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:06.614133  327289 system_pods.go:74] duration metric: took 3.807528ms to wait for pod list to return data ...
	I1216 03:08:06.614150  327289 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:08:06.616562  327289 default_sa.go:45] found service account: "default"
	I1216 03:08:06.616581  327289 default_sa.go:55] duration metric: took 2.421799ms for default service account to be created ...
	I1216 03:08:06.616590  327289 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:08:06.619513  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:06.619544  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:06.619553  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:06.619575  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:06.619583  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:06.619589  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:06.619595  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:06.619623  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:06.619650  327289 retry.go:31] will retry after 188.571069ms: missing components: kube-dns
	I1216 03:08:06.812790  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:06.812853  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:06.812860  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:06.812866  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:06.812872  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:06.812877  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:06.812882  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:06.812889  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:06.812906  327289 retry.go:31] will retry after 269.474978ms: missing components: kube-dns
	I1216 03:08:07.086721  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:07.086753  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:07.086761  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:07.086767  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:07.086771  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:07.086775  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:07.086778  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:07.086783  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:07.086796  327289 retry.go:31] will retry after 345.183644ms: missing components: kube-dns
	I1216 03:08:07.436048  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:07.436085  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:07.436093  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:07.436102  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:07.436109  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:07.436114  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:07.436120  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:07.436133  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:07.436150  327289 retry.go:31] will retry after 402.382971ms: missing components: kube-dns
	I1216 03:08:07.843560  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:07.843589  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:07.843595  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:07.843607  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:07.843614  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:07.843619  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:07.843625  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:07.843630  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:07.843647  327289 retry.go:31] will retry after 495.107547ms: missing components: kube-dns
	I1216 03:08:08.342503  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:08.342538  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:08.342543  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:08.342550  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:08.342554  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:08.342558  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:08.342561  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:08.342564  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:08.342578  327289 retry.go:31] will retry after 764.298983ms: missing components: kube-dns
	I1216 03:08:09.111900  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:09.111930  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:09.111936  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:09.111943  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:09.111947  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:09.111952  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:09.111955  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:09.111959  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:09.111974  327289 retry.go:31] will retry after 870.947057ms: missing components: kube-dns
	I1216 03:08:09.987279  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:09.987313  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:09.987318  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:09.987324  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:09.987332  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:09.987336  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:09.987339  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:09.987342  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:09.987356  327289 retry.go:31] will retry after 1.127635162s: missing components: kube-dns
	W1216 03:08:07.955704  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	W1216 03:08:09.955960  324480 pod_ready.go:104] pod "coredns-66bc5c9577-rz62v" is not "Ready", error: <nil>
	I1216 03:08:08.885244  336341 main.go:143] libmachine: Using SSH client type: native
	I1216 03:08:08.885481  336341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1216 03:08:08.885499  336341 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:08:09.176304  336341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:08:09.176340  336341 machine.go:97] duration metric: took 4.10508772s to provisionDockerMachine
	I1216 03:08:09.176352  336341 client.go:176] duration metric: took 10.117495893s to LocalClient.Create
	I1216 03:08:09.176372  336341 start.go:167] duration metric: took 10.117588215s to libmachine.API.Create "enable-default-cni-646016"
	I1216 03:08:09.176381  336341 start.go:293] postStartSetup for "enable-default-cni-646016" (driver="docker")
	I1216 03:08:09.176397  336341 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:08:09.176485  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:08:09.176543  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.196952  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.298623  336341 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:08:09.302002  336341 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:08:09.302034  336341 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:08:09.302046  336341 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/addons for local assets ...
	I1216 03:08:09.302113  336341 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5058/.minikube/files for local assets ...
	I1216 03:08:09.302207  336341 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem -> 85862.pem in /etc/ssl/certs
	I1216 03:08:09.302306  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:08:09.309995  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:08:09.330598  336341 start.go:296] duration metric: took 154.200303ms for postStartSetup
	I1216 03:08:09.331015  336341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646016
	I1216 03:08:09.348953  336341 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/config.json ...
	I1216 03:08:09.349224  336341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:08:09.349283  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.368047  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.463918  336341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:08:09.468547  336341 start.go:128] duration metric: took 10.412255965s to createHost
	I1216 03:08:09.468574  336341 start.go:83] releasing machines lock for "enable-default-cni-646016", held for 10.412407201s
	I1216 03:08:09.468629  336341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-646016
	I1216 03:08:09.489776  336341 ssh_runner.go:195] Run: cat /version.json
	I1216 03:08:09.489844  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.489872  336341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:08:09.489950  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:09.508436  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.508998  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:09.665209  336341 ssh_runner.go:195] Run: systemctl --version
	I1216 03:08:09.671669  336341 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:08:09.706599  336341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:08:09.711670  336341 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:08:09.711722  336341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:08:09.739312  336341 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:08:09.739337  336341 start.go:496] detecting cgroup driver to use...
	I1216 03:08:09.739373  336341 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 03:08:09.739423  336341 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:08:09.755455  336341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:08:09.768005  336341 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:08:09.768055  336341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:08:09.785613  336341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:08:09.802861  336341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:08:09.887759  336341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:08:09.974398  336341 docker.go:234] disabling docker service ...
	I1216 03:08:09.974466  336341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:08:09.994022  336341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:08:10.006915  336341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:08:10.094973  336341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:08:10.178913  336341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:08:10.192767  336341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:08:10.207669  336341 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:08:10.207722  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.218263  336341 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 03:08:10.218331  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.227302  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.236185  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.244927  336341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:08:10.252937  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.261710  336341 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.275363  336341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:08:10.284517  336341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:08:10.291984  336341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:08:10.299171  336341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:08:10.378486  336341 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:08:10.843377  336341 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:08:10.843442  336341 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:08:10.847520  336341 start.go:564] Will wait 60s for crictl version
	I1216 03:08:10.847570  336341 ssh_runner.go:195] Run: which crictl
	I1216 03:08:10.851360  336341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:08:10.874737  336341 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 03:08:10.874804  336341 ssh_runner.go:195] Run: crio --version
	I1216 03:08:10.903749  336341 ssh_runner.go:195] Run: crio --version
	I1216 03:08:10.935080  336341 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 03:08:10.958125  324480 pod_ready.go:94] pod "coredns-66bc5c9577-rz62v" is "Ready"
	I1216 03:08:10.958169  324480 pod_ready.go:86] duration metric: took 32.008242289s for pod "coredns-66bc5c9577-rz62v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.960757  324480 pod_ready.go:83] waiting for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.965072  324480 pod_ready.go:94] pod "etcd-embed-certs-742794" is "Ready"
	I1216 03:08:10.965089  324480 pod_ready.go:86] duration metric: took 4.308418ms for pod "etcd-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.967233  324480 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.971011  324480 pod_ready.go:94] pod "kube-apiserver-embed-certs-742794" is "Ready"
	I1216 03:08:10.971030  324480 pod_ready.go:86] duration metric: took 3.781503ms for pod "kube-apiserver-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:10.972914  324480 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.154723  324480 pod_ready.go:94] pod "kube-controller-manager-embed-certs-742794" is "Ready"
	I1216 03:08:11.154754  324480 pod_ready.go:86] duration metric: took 181.818907ms for pod "kube-controller-manager-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.354049  324480 pod_ready.go:83] waiting for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.754568  324480 pod_ready.go:94] pod "kube-proxy-899tv" is "Ready"
	I1216 03:08:11.754598  324480 pod_ready.go:86] duration metric: took 400.525561ms for pod "kube-proxy-899tv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:11.954663  324480 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:12.354668  324480 pod_ready.go:94] pod "kube-scheduler-embed-certs-742794" is "Ready"
	I1216 03:08:12.354696  324480 pod_ready.go:86] duration metric: took 400.010965ms for pod "kube-scheduler-embed-certs-742794" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:12.354711  324480 pod_ready.go:40] duration metric: took 33.408689402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:12.412771  324480 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:08:12.415012  324480 out.go:179] * Done! kubectl is now configured to use "embed-certs-742794" cluster and "default" namespace by default
	I1216 03:08:10.936405  336341 cli_runner.go:164] Run: docker network inspect enable-default-cni-646016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:08:10.954836  336341 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 03:08:10.959680  336341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:08:10.971857  336341 kubeadm.go:884] updating cluster {Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:08:10.972006  336341 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:08:10.972081  336341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:08:11.004114  336341 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:08:11.004135  336341 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:08:11.004181  336341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:08:11.029519  336341 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:08:11.029539  336341 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:08:11.029545  336341 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 03:08:11.029628  336341 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=enable-default-cni-646016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1216 03:08:11.029698  336341 ssh_runner.go:195] Run: crio config
	I1216 03:08:11.074796  336341 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:08:11.074837  336341 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:08:11.074868  336341 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-646016 NodeName:enable-default-cni-646016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:08:11.075006  336341 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-646016"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:08:11.075072  336341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:08:11.083266  336341 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:08:11.083327  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:08:11.091248  336341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1216 03:08:11.103827  336341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:08:11.119310  336341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 03:08:11.132947  336341 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:08:11.136635  336341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:08:11.146487  336341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:08:11.231049  336341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:08:11.259536  336341 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016 for IP: 192.168.76.2
	I1216 03:08:11.259558  336341 certs.go:195] generating shared ca certs ...
	I1216 03:08:11.259577  336341 certs.go:227] acquiring lock for ca certs: {Name:mkbf63c9e1af88ec0cf6dfa2473bc5e04ab77181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.259768  336341 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key
	I1216 03:08:11.259856  336341 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key
	I1216 03:08:11.259872  336341 certs.go:257] generating profile certs ...
	I1216 03:08:11.259945  336341 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.key
	I1216 03:08:11.259968  336341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.crt with IP's: []
	I1216 03:08:11.493687  336341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.crt ...
	I1216 03:08:11.493712  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.crt: {Name:mk36be957c3f6e4e308d0508e4b59467834da1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.493932  336341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.key ...
	I1216 03:08:11.493951  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/client.key: {Name:mk7444d2f7692f28304ae915f9b55e9c99798a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.494069  336341 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64
	I1216 03:08:11.494086  336341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 03:08:11.724186  336341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64 ...
	I1216 03:08:11.724213  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64: {Name:mkc2aac88a53da8bf33d1e25029c82dc6fc0e58d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.724400  336341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64 ...
	I1216 03:08:11.724419  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64: {Name:mk9a85bf29933f41d39d06fba90d821fb048dd68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.724522  336341 certs.go:382] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt.8780ff64 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt
	I1216 03:08:11.724631  336341 certs.go:386] copying /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key.8780ff64 -> /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key
	I1216 03:08:11.724718  336341 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key
	I1216 03:08:11.724740  336341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt with IP's: []
	I1216 03:08:11.789665  336341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt ...
	I1216 03:08:11.789693  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt: {Name:mk0e84e0c50b14eb4bec375c93d4765472f16aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.789897  336341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key ...
	I1216 03:08:11.789918  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key: {Name:mk95393ee9755a4c21c03c2b4a0362d3fd9f2978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:11.790146  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem (1338 bytes)
	W1216 03:08:11.790203  336341 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586_empty.pem, impossibly tiny 0 bytes
	I1216 03:08:11.790219  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:08:11.790263  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:08:11.790308  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:08:11.790345  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/certs/key.pem (1679 bytes)
	I1216 03:08:11.790404  336341 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem (1708 bytes)
	I1216 03:08:11.791078  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:08:11.810513  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:08:11.828808  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:08:11.847636  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:08:11.865772  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 03:08:11.883895  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 03:08:11.901142  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:08:11.918507  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/enable-default-cni-646016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:08:11.935882  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:08:11.955785  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/certs/8586.pem --> /usr/share/ca-certificates/8586.pem (1338 bytes)
	I1216 03:08:11.973053  336341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/ssl/certs/85862.pem --> /usr/share/ca-certificates/85862.pem (1708 bytes)
	I1216 03:08:11.990707  336341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:08:12.003403  336341 ssh_runner.go:195] Run: openssl version
	I1216 03:08:12.009393  336341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.016909  336341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8586.pem /etc/ssl/certs/8586.pem
	I1216 03:08:12.024412  336341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.028129  336341 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:33 /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.028171  336341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8586.pem
	I1216 03:08:12.063186  336341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:08:12.071109  336341 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8586.pem /etc/ssl/certs/51391683.0
	I1216 03:08:12.078722  336341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.086405  336341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85862.pem /etc/ssl/certs/85862.pem
	I1216 03:08:12.094612  336341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.098672  336341 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:33 /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.098741  336341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85862.pem
	I1216 03:08:12.137744  336341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:08:12.145534  336341 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85862.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:08:12.153311  336341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.161455  336341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:08:12.168746  336341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.172545  336341 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.172596  336341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:08:12.210569  336341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:08:12.219185  336341 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
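
The three symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names for the CA certificates that were just copied onto the node. A minimal sketch of how the same mapping can be reproduced by hand, using the minikubeCA.pem path from the log (illustrative only, not part of the test run):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$hash"    # expected to print b5213941, matching the b5213941.0 link above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
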
	I1216 03:08:12.227595  336341 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:08:12.231493  336341 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:08:12.231568  336341 kubeadm.go:401] StartCluster: {Name:enable-default-cni-646016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-646016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:08:12.231662  336341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:08:12.231737  336341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:08:12.260343  336341 cri.go:89] found id: ""
	I1216 03:08:12.260407  336341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:08:12.268991  336341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:08:12.277890  336341 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:08:12.277967  336341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:08:12.287125  336341 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:08:12.287146  336341 kubeadm.go:158] found existing configuration files:
	
	I1216 03:08:12.287211  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:08:12.295341  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:08:12.295409  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:08:12.302841  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:08:12.311074  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:08:12.311143  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:08:12.318508  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:08:12.327776  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:08:12.327859  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:08:12.335771  336341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:08:12.344146  336341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:08:12.344202  336341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
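
The four grep/rm pairs above apply one rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. A condensed, equivalent sketch of that cleanup:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
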
	I1216 03:08:12.352703  336341 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:08:12.405135  336341 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:08:12.405218  336341 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:08:12.431274  336341 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:08:12.431370  336341 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 03:08:12.431440  336341 kubeadm.go:319] OS: Linux
	I1216 03:08:12.431500  336341 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:08:12.431564  336341 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:08:12.431667  336341 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:08:12.431745  336341 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:08:12.431891  336341 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:08:12.431976  336341 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:08:12.432056  336341 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:08:12.432138  336341 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 03:08:12.509084  336341 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:08:12.509223  336341 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:08:12.509358  336341 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:08:12.517074  336341 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:08:12.519773  336341 out.go:252]   - Generating certificates and keys ...
	I1216 03:08:12.519891  336341 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:08:12.520018  336341 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:08:12.725300  336341 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:08:12.969403  336341 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:08:13.375727  336341 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:08:13.601180  336341 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:08:11.121289  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:11.121326  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:11.121335  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:11.121345  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:11.121352  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:11.121360  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:11.121365  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:11.121373  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:11.121388  327289 retry.go:31] will retry after 1.233440062s: missing components: kube-dns
	I1216 03:08:12.358695  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:12.358741  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:12.358752  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:12.358768  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:12.358777  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:12.358784  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:12.358790  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:12.358797  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:12.358810  327289 retry.go:31] will retry after 1.822030559s: missing components: kube-dns
	I1216 03:08:14.185955  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:14.186001  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:14.186010  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:14.186017  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:14.186026  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:14.186037  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:14.186043  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:14.186049  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:14.186070  327289 retry.go:31] will retry after 2.807521371s: missing components: kube-dns
	I1216 03:08:13.940262  336341 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:08:13.940475  336341 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:08:14.024195  336341 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:08:14.024397  336341 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-646016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 03:08:14.623212  336341 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:08:14.895703  336341 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:08:14.994613  336341 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:08:14.994689  336341 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:08:15.087054  336341 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:08:15.258741  336341 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:08:15.593339  336341 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:08:15.949425  336341 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:08:16.302134  336341 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:08:16.302611  336341 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:08:16.306362  336341 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:08:16.307899  336341 out.go:252]   - Booting up control plane ...
	I1216 03:08:16.307994  336341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:08:16.308104  336341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:08:16.308614  336341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:08:16.322329  336341 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:08:16.322461  336341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:08:16.329904  336341 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:08:16.330319  336341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:08:16.330386  336341 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:08:16.435554  336341 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:08:16.435735  336341 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:08:17.436293  336341 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000897774s
	I1216 03:08:17.439555  336341 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:08:17.439722  336341 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 03:08:17.439886  336341 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:08:17.440002  336341 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:08:16.997472  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:16.997508  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:16.997514  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:16.997520  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:16.997524  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:16.997528  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:16.997531  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:16.997534  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:16.997547  327289 retry.go:31] will retry after 2.719576061s: missing components: kube-dns
	I1216 03:08:19.724380  327289 system_pods.go:86] 7 kube-system pods found
	I1216 03:08:19.724417  327289 system_pods.go:89] "coredns-66bc5c9577-5jz9m" [2638fe43-6473-46a1-9919-c7f574cf51fe] Running
	I1216 03:08:19.724426  327289 system_pods.go:89] "etcd-custom-flannel-646016" [e955fa50-62e6-46d5-8fc8-cb656d206582] Running
	I1216 03:08:19.724433  327289 system_pods.go:89] "kube-apiserver-custom-flannel-646016" [3de753fc-e0f4-4ea3-8639-a23d8d732421] Running
	I1216 03:08:19.724438  327289 system_pods.go:89] "kube-controller-manager-custom-flannel-646016" [134f8233-86ee-4f86-951e-84cfbf67d3e2] Running
	I1216 03:08:19.724443  327289 system_pods.go:89] "kube-proxy-6wswf" [04dbbefd-3d38-445b-84a6-83a73c2d13cf] Running
	I1216 03:08:19.724448  327289 system_pods.go:89] "kube-scheduler-custom-flannel-646016" [677ae404-5fbb-4312-9eba-df0f6bf2919f] Running
	I1216 03:08:19.724453  327289 system_pods.go:89] "storage-provisioner" [07d89fed-1c00-4ce4-aa95-78189f549c3c] Running
	I1216 03:08:19.724462  327289 system_pods.go:126] duration metric: took 13.107865367s to wait for k8s-apps to be running ...
	I1216 03:08:19.724471  327289 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:08:19.724523  327289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:08:19.743141  327289 system_svc.go:56] duration metric: took 18.659974ms WaitForService to wait for kubelet
	I1216 03:08:19.743173  327289 kubeadm.go:587] duration metric: took 18.981523868s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:08:19.743193  327289 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:08:19.746714  327289 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 03:08:19.746753  327289 node_conditions.go:123] node cpu capacity is 8
	I1216 03:08:19.746777  327289 node_conditions.go:105] duration metric: took 3.578184ms to run NodePressure ...
	I1216 03:08:19.746793  327289 start.go:242] waiting for startup goroutines ...
	I1216 03:08:19.746809  327289 start.go:247] waiting for cluster config update ...
	I1216 03:08:19.746838  327289 start.go:256] writing updated cluster config ...
	I1216 03:08:19.747161  327289 ssh_runner.go:195] Run: rm -f paused
	I1216 03:08:19.751784  327289 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:19.756263  327289 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5jz9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.761149  327289 pod_ready.go:94] pod "coredns-66bc5c9577-5jz9m" is "Ready"
	I1216 03:08:19.761174  327289 pod_ready.go:86] duration metric: took 4.886245ms for pod "coredns-66bc5c9577-5jz9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.763455  327289 pod_ready.go:83] waiting for pod "etcd-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.767847  327289 pod_ready.go:94] pod "etcd-custom-flannel-646016" is "Ready"
	I1216 03:08:19.767871  327289 pod_ready.go:86] duration metric: took 4.393657ms for pod "etcd-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.769899  327289 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.774343  327289 pod_ready.go:94] pod "kube-apiserver-custom-flannel-646016" is "Ready"
	I1216 03:08:19.774365  327289 pod_ready.go:86] duration metric: took 4.448057ms for pod "kube-apiserver-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:19.776453  327289 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.156759  327289 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-646016" is "Ready"
	I1216 03:08:20.156792  327289 pod_ready.go:86] duration metric: took 380.316816ms for pod "kube-controller-manager-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.356883  327289 pod_ready.go:83] waiting for pod "kube-proxy-6wswf" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.756577  327289 pod_ready.go:94] pod "kube-proxy-6wswf" is "Ready"
	I1216 03:08:20.756610  327289 pod_ready.go:86] duration metric: took 399.701773ms for pod "kube-proxy-6wswf" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:20.957789  327289 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:21.356423  327289 pod_ready.go:94] pod "kube-scheduler-custom-flannel-646016" is "Ready"
	I1216 03:08:21.356519  327289 pod_ready.go:86] duration metric: took 398.643487ms for pod "kube-scheduler-custom-flannel-646016" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:08:21.356550  327289 pod_ready.go:40] duration metric: took 1.604733623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:08:21.405898  327289 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:08:21.407757  327289 out.go:179] * Done! kubectl is now configured to use "custom-flannel-646016" cluster and "default" namespace by default
	I1216 03:08:19.136994  336341 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.697392724s
	I1216 03:08:19.664959  336341 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.22534115s
	I1216 03:08:21.441753  336341 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002188144s
	I1216 03:08:21.460160  336341 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:08:21.470163  336341 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:08:21.480737  336341 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:08:21.481078  336341 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-646016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:08:21.489303  336341 kubeadm.go:319] [bootstrap-token] Using token: 4e611j.sp5pjcqpogdnm1bn
	I1216 03:08:21.490576  336341 out.go:252]   - Configuring RBAC rules ...
	I1216 03:08:21.490726  336341 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:08:21.494174  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:08:21.500362  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:08:21.502894  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:08:21.505166  336341 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:08:21.507914  336341 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:08:21.851029  336341 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:08:22.260856  336341 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:08:22.848457  336341 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:08:22.849671  336341 kubeadm.go:319] 
	I1216 03:08:22.849766  336341 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:08:22.849779  336341 kubeadm.go:319] 
	I1216 03:08:22.849913  336341 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:08:22.849925  336341 kubeadm.go:319] 
	I1216 03:08:22.849954  336341 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:08:22.850035  336341 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:08:22.850104  336341 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:08:22.850110  336341 kubeadm.go:319] 
	I1216 03:08:22.850182  336341 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:08:22.850187  336341 kubeadm.go:319] 
	I1216 03:08:22.850255  336341 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:08:22.850261  336341 kubeadm.go:319] 
	I1216 03:08:22.850330  336341 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:08:22.850430  336341 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:08:22.850522  336341 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:08:22.850527  336341 kubeadm.go:319] 
	I1216 03:08:22.850637  336341 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:08:22.850747  336341 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:08:22.850754  336341 kubeadm.go:319] 
	I1216 03:08:22.850873  336341 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4e611j.sp5pjcqpogdnm1bn \
	I1216 03:08:22.851009  336341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca \
	I1216 03:08:22.851038  336341 kubeadm.go:319] 	--control-plane 
	I1216 03:08:22.851043  336341 kubeadm.go:319] 
	I1216 03:08:22.851146  336341 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:08:22.851150  336341 kubeadm.go:319] 
	I1216 03:08:22.851230  336341 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4e611j.sp5pjcqpogdnm1bn \
	I1216 03:08:22.851359  336341 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e38e2c8c8fec76213f7b757530dcaf9e877db6d8250552a61f194b6b5c7ed9ca 
	I1216 03:08:22.855155  336341 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 03:08:22.855298  336341 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
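
For reference, the sha256 value in the join commands above is the hash of the cluster CA's public key and can be recomputed from the CA certificate. A sketch assuming an RSA CA and the certificateDir used in this run (/var/lib/minikube/certs):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'    # expected to print the e38e2c8c... value shown in the join command above
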
	I1216 03:08:22.855486  336341 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:08:22.857898  336341 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:08:22.859410  336341 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:08:22.870592  336341 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 03:08:22.888623  336341 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:08:22.888779  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:22.888873  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-646016 minikube.k8s.io/updated_at=2025_12_16T03_08_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=enable-default-cni-646016 minikube.k8s.io/primary=true
	I1216 03:08:22.991871  336341 ops.go:34] apiserver oom_adj: -16
	I1216 03:08:22.992025  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:23.492377  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:23.992562  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:24.492147  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:24.992628  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:25.492722  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:25.992296  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:26.492614  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:26.992925  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:27.492520  336341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:08:27.655440  336341 kubeadm.go:1114] duration metric: took 4.76672361s to wait for elevateKubeSystemPrivileges
	I1216 03:08:27.655480  336341 kubeadm.go:403] duration metric: took 15.423918814s to StartCluster
	I1216 03:08:27.655502  336341 settings.go:142] acquiring lock: {Name:mk82381256985b4d9476bdcd7210ee60075d1677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:27.655582  336341 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:08:27.658274  336341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5058/kubeconfig: {Name:mk3606e4fcf38fbba3a6b91646d4482657bcb57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:08:27.658621  336341 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:08:27.658758  336341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:08:27.658769  336341 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:08:27.658891  336341 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-646016"
	I1216 03:08:27.658914  336341 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-646016"
	I1216 03:08:27.658949  336341 host.go:66] Checking if "enable-default-cni-646016" exists ...
	I1216 03:08:27.658939  336341 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-646016"
	I1216 03:08:27.659042  336341 config.go:182] Loaded profile config "enable-default-cni-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:08:27.659053  336341 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-646016"
	I1216 03:08:27.659429  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:27.659491  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:27.663275  336341 out.go:179] * Verifying Kubernetes components...
	I1216 03:08:27.664913  336341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:08:27.686992  336341 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:08:27.688966  336341 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:08:27.688997  336341 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:08:27.689056  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:27.689478  336341 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-646016"
	I1216 03:08:27.689529  336341 host.go:66] Checking if "enable-default-cni-646016" exists ...
	I1216 03:08:27.690125  336341 cli_runner.go:164] Run: docker container inspect enable-default-cni-646016 --format={{.State.Status}}
	I1216 03:08:27.726047  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:27.728027  336341 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:08:27.728050  336341 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:08:27.728372  336341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-646016
	I1216 03:08:27.759321  336341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/enable-default-cni-646016/id_rsa Username:docker}
	I1216 03:08:27.773134  336341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:08:27.837302  336341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:08:27.868601  336341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:08:27.893124  336341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
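
A quick way to confirm the two manifests applied above actually landed, assuming kubectl on the host is already pointed at this profile (the kubeconfig was rewritten a few lines earlier):

    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass    # minikube's default class is normally named "standard"
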
	I1216 03:08:28.036625  336341 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
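
The host record confirmed on the previous line comes from the sed pipeline a few lines up, which adds a hosts block for host.minikube.internal to the CoreDNS Corefile. A sketch of how the record can be verified, again assuming kubectl access to this cluster:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
      | grep -A2 'hosts {'    # expected to show 192.168.76.1 host.minikube.internal
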
	I1216 03:08:28.038461  336341 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-646016" to be "Ready" ...
	I1216 03:08:28.051516  336341 node_ready.go:49] node "enable-default-cni-646016" is "Ready"
	I1216 03:08:28.051545  336341 node_ready.go:38] duration metric: took 13.03697ms for node "enable-default-cni-646016" to be "Ready" ...
	I1216 03:08:28.051560  336341 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:08:28.051609  336341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:08:28.283391  336341 api_server.go:72] duration metric: took 624.733101ms to wait for apiserver process to appear ...
	I1216 03:08:28.283419  336341 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:08:28.283450  336341 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 03:08:28.289623  336341 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 03:08:28.290954  336341 api_server.go:141] control plane version: v1.34.2
	I1216 03:08:28.290983  336341 api_server.go:131] duration metric: took 7.556893ms to wait for apiserver health ...
	I1216 03:08:28.290995  336341 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:08:28.293098  336341 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:08:28.294210  336341 addons.go:530] duration metric: took 635.42369ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:08:28.295931  336341 system_pods.go:59] 8 kube-system pods found
	I1216 03:08:28.295963  336341 system_pods.go:61] "coredns-66bc5c9577-67cfq" [7a2ee8f9-53c0-483f-9786-c76c8e513714] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:28.295981  336341 system_pods.go:61] "coredns-66bc5c9577-f8b69" [d0bd02d8-d5b3-4d44-acb6-11583f5be4bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:28.295997  336341 system_pods.go:61] "etcd-enable-default-cni-646016" [30dd6d33-98c4-43c5-ae6e-fa315b7c372c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:28.296011  336341 system_pods.go:61] "kube-apiserver-enable-default-cni-646016" [cd3669ac-d17f-4f74-a414-6513abbe1075] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:08:28.296022  336341 system_pods.go:61] "kube-controller-manager-enable-default-cni-646016" [68d2955d-56f5-42d3-9a31-1ddb04a5f628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:08:28.296033  336341 system_pods.go:61] "kube-proxy-q6qnn" [fd3d181b-8587-4915-904a-098acd3baeee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:08:28.296055  336341 system_pods.go:61] "kube-scheduler-enable-default-cni-646016" [865acd0b-ea9c-4f63-befa-659160b5c8b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:08:28.296067  336341 system_pods.go:61] "storage-provisioner" [3385c820-40bd-46ae-beae-3159829cae83] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:28.296075  336341 system_pods.go:74] duration metric: took 5.07388ms to wait for pod list to return data ...
	I1216 03:08:28.296086  336341 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:08:28.298579  336341 default_sa.go:45] found service account: "default"
	I1216 03:08:28.298599  336341 default_sa.go:55] duration metric: took 2.503857ms for default service account to be created ...
	I1216 03:08:28.298608  336341 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:08:28.395584  336341 system_pods.go:86] 8 kube-system pods found
	I1216 03:08:28.395623  336341 system_pods.go:89] "coredns-66bc5c9577-67cfq" [7a2ee8f9-53c0-483f-9786-c76c8e513714] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:28.395636  336341 system_pods.go:89] "coredns-66bc5c9577-f8b69" [d0bd02d8-d5b3-4d44-acb6-11583f5be4bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:28.395646  336341 system_pods.go:89] "etcd-enable-default-cni-646016" [30dd6d33-98c4-43c5-ae6e-fa315b7c372c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:28.395658  336341 system_pods.go:89] "kube-apiserver-enable-default-cni-646016" [cd3669ac-d17f-4f74-a414-6513abbe1075] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:08:28.395667  336341 system_pods.go:89] "kube-controller-manager-enable-default-cni-646016" [68d2955d-56f5-42d3-9a31-1ddb04a5f628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:08:28.395674  336341 system_pods.go:89] "kube-proxy-q6qnn" [fd3d181b-8587-4915-904a-098acd3baeee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:08:28.395685  336341 system_pods.go:89] "kube-scheduler-enable-default-cni-646016" [865acd0b-ea9c-4f63-befa-659160b5c8b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:08:28.395691  336341 system_pods.go:89] "storage-provisioner" [3385c820-40bd-46ae-beae-3159829cae83] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:28.395746  336341 retry.go:31] will retry after 212.366484ms: missing components: kube-dns, kube-proxy
	I1216 03:08:28.544030  336341 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-646016" context rescaled to 1 replicas
	I1216 03:08:28.612869  336341 system_pods.go:86] 8 kube-system pods found
	I1216 03:08:28.612908  336341 system_pods.go:89] "coredns-66bc5c9577-67cfq" [7a2ee8f9-53c0-483f-9786-c76c8e513714] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:28.613526  336341 system_pods.go:89] "coredns-66bc5c9577-f8b69" [d0bd02d8-d5b3-4d44-acb6-11583f5be4bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:08:28.613560  336341 system_pods.go:89] "etcd-enable-default-cni-646016" [30dd6d33-98c4-43c5-ae6e-fa315b7c372c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:08:28.613570  336341 system_pods.go:89] "kube-apiserver-enable-default-cni-646016" [cd3669ac-d17f-4f74-a414-6513abbe1075] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:08:28.613580  336341 system_pods.go:89] "kube-controller-manager-enable-default-cni-646016" [68d2955d-56f5-42d3-9a31-1ddb04a5f628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:08:28.613588  336341 system_pods.go:89] "kube-proxy-q6qnn" [fd3d181b-8587-4915-904a-098acd3baeee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:08:28.613596  336341 system_pods.go:89] "kube-scheduler-enable-default-cni-646016" [865acd0b-ea9c-4f63-befa-659160b5c8b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:08:28.613604  336341 system_pods.go:89] "storage-provisioner" [3385c820-40bd-46ae-beae-3159829cae83] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:08:28.613623  336341 retry.go:31] will retry after 295.895043ms: missing components: kube-dns, kube-proxy
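
The retry loop above simply polls kube-system until coredns and kube-proxy report Running. The same wait can be reproduced interactively with the standard component labels, assuming kubectl access to the cluster:

    kubectl -n kube-system get pods -l k8s-app=kube-dns -w      # watch coredns leave Pending
    kubectl -n kube-system get pods -l k8s-app=kube-proxy
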
	
	
	==> CRI-O <==
	Dec 16 03:08:00 embed-certs-742794 crio[564]: time="2025-12-16T03:08:00.924287624Z" level=info msg="Started container" PID=1761 containerID=9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper id=87ab52d7-efd7-4241-9d12-18e2b050d7b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=afcd90dfd8d85707de5dda050fb2190b08d57502c1d6e7eba8fef4d985c781eb
	Dec 16 03:08:01 embed-certs-742794 crio[564]: time="2025-12-16T03:08:01.022216278Z" level=info msg="Removing container: 1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93" id=2f79ad87-2596-44c5-9ad4-763ac97ebe93 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:08:01 embed-certs-742794 crio[564]: time="2025-12-16T03:08:01.036351631Z" level=info msg="Removed container 1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=2f79ad87-2596-44c5-9ad4-763ac97ebe93 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.038611565Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb616de5-c9d9-4834-84d5-4fe72844629f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.039601853Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4427f666-a22c-4ae4-935d-00b6d1117880 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.040918119Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5376f624-1030-4d51-8206-1cc71ee5517e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.041064961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.045668974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.045885966Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bd25dbb6165f48a14ffeabe995bc2b5e86b95fb4d6b4cb4bd8e3a7feaebccabf/merged/etc/passwd: no such file or directory"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.045920576Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bd25dbb6165f48a14ffeabe995bc2b5e86b95fb4d6b4cb4bd8e3a7feaebccabf/merged/etc/group: no such file or directory"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.046219339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.0703352Z" level=info msg="Created container d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68: kube-system/storage-provisioner/storage-provisioner" id=5376f624-1030-4d51-8206-1cc71ee5517e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.071115078Z" level=info msg="Starting container: d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68" id=90deab09-4164-4471-9d60-df6f4b654ed8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:08:09 embed-certs-742794 crio[564]: time="2025-12-16T03:08:09.072910758Z" level=info msg="Started container" PID=1775 containerID=d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68 description=kube-system/storage-provisioner/storage-provisioner id=90deab09-4164-4471-9d60-df6f4b654ed8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dd5ba515886d0fd16fab845092d737393a27e263a9fdc75b7513c6eb1890474
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.872508173Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a144c1d1-a479-4b6f-8cb2-1f8ceea8f873 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.873431808Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=78ab26bf-df26-43fe-a945-314e8d94ed3d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.874503162Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=3b329f92-5e4c-4cc4-86ec-4f625e4d631a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.874636954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.880603022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.881101461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.913116424Z" level=info msg="Created container 0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=3b329f92-5e4c-4cc4-86ec-4f625e4d631a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.913898323Z" level=info msg="Starting container: 0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6" id=680e1089-df14-4062-adfb-9fd0ed89c854 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 03:08:23 embed-certs-742794 crio[564]: time="2025-12-16T03:08:23.91610449Z" level=info msg="Started container" PID=1811 containerID=0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper id=680e1089-df14-4062-adfb-9fd0ed89c854 name=/runtime.v1.RuntimeService/StartContainer sandboxID=afcd90dfd8d85707de5dda050fb2190b08d57502c1d6e7eba8fef4d985c781eb
	Dec 16 03:08:24 embed-certs-742794 crio[564]: time="2025-12-16T03:08:24.083983047Z" level=info msg="Removing container: 9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f" id=e72937be-d42c-4871-bf7f-3dbb69394f99 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 03:08:24 embed-certs-742794 crio[564]: time="2025-12-16T03:08:24.09734664Z" level=info msg="Removed container 9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6/dashboard-metrics-scraper" id=e72937be-d42c-4871-bf7f-3dbb69394f99 name=/runtime.v1.RuntimeService/RemoveContainer
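
The CRI-O excerpt above is taken from the node's crio service logs; the same entries can be pulled from the node directly, assuming CRI-O runs as the crio systemd unit inside the kicbase container (a sketch, not part of the test run):

    minikube -p embed-certs-742794 ssh "sudo journalctl -u crio --no-pager -n 50"
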
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0916cce970194       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   afcd90dfd8d85       dashboard-metrics-scraper-6ffb444bf9-g2wm6   kubernetes-dashboard
	d82d7118ed087       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   0dd5ba515886d       storage-provisioner                          kube-system
	424c3093fc615       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   b63f0ed462484       kubernetes-dashboard-855c9754f9-4srjf        kubernetes-dashboard
	42861ed8183ec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   9e557055aa88a       coredns-66bc5c9577-rz62v                     kube-system
	cfa66554ca3d3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   354833035f347       busybox                                      default
	9eec54ef0eb86       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   36f4ca3077f31       kindnet-7vrj8                                kube-system
	ab93683ff228d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   6841ed9e4bc94       kube-proxy-899tv                             kube-system
	7ec84b1f0e67e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   0dd5ba515886d       storage-provisioner                          kube-system
	cf6f05491bb98       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           55 seconds ago      Running             kube-scheduler              0                   01847433d40e7       kube-scheduler-embed-certs-742794            kube-system
	a181636c6acb9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           55 seconds ago      Running             kube-apiserver              0                   63d651ca988a6       kube-apiserver-embed-certs-742794            kube-system
	667d4cacc5909       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   fc165b9de898a       etcd-embed-certs-742794                      kube-system
	81e653c21515d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           55 seconds ago      Running             kube-controller-manager     0                   63f2a0774f6cf       kube-controller-manager-embed-certs-742794   kube-system
	
	
	==> coredns [42861ed8183ec9b607073cc1143c737d3eff40777a75bb80cb7974e97a232559] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39399 - 36200 "HINFO IN 3177294149401379934.7077990097688895753. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014618639s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-742794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-742794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=embed-certs-742794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_06_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:06:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-742794
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:08:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:08:08 +0000   Tue, 16 Dec 2025 03:06:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-742794
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe49017aad4d090675962827693c8b15
	  System UUID:                227aaafb-25e6-44ee-81ce-b7feaed19af9
	  Boot ID:                    c9f5d90f-89fe-44f4-95d7-04417ff0501d
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-rz62v                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-742794                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-7vrj8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-742794             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-742794    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-899tv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-742794             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g2wm6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4srjf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node embed-certs-742794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node embed-certs-742794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node embed-certs-742794 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node embed-certs-742794 event: Registered Node embed-certs-742794 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-742794 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-742794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-742794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-742794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-742794 event: Registered Node embed-certs-742794 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.042098] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023858] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +2.047763] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 02:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[  +8.319111] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 5e b3 f6 de b7 0a 28 79 6f a3 81 08 00
	[Dec16 03:08] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 ad 51 11 42 71 08 06
	[  +0.107389] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 8e a8 f7 38 72 7c 08 06
	
	
	==> etcd [667d4cacc59090493c14b00dca21c677045a2a6fb1054fcb25d012a6e29094bf] <==
	{"level":"warn","ts":"2025-12-16T03:07:36.701771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.721730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.721717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.733404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.741007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.747888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.762297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.778423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.786738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.802740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.814603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.822929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.830389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.836739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.843372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.849856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.856219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.863971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.876897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.883411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.890196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:07:36.935118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56804","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:07:57.923697Z","caller":"traceutil/trace.go:172","msg":"trace[606713941] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"111.438234ms","start":"2025-12-16T03:07:57.812240Z","end":"2025-12-16T03:07:57.923679Z","steps":["trace[606713941] 'process raft request'  (duration: 111.101859ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:08:10.667751Z","caller":"traceutil/trace.go:172","msg":"trace[729046098] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"121.35559ms","start":"2025-12-16T03:08:10.546365Z","end":"2025-12-16T03:08:10.667720Z","steps":["trace[729046098] 'process raft request'  (duration: 121.320874ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T03:08:10.667791Z","caller":"traceutil/trace.go:172","msg":"trace[761896218] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"122.407605ms","start":"2025-12-16T03:08:10.545358Z","end":"2025-12-16T03:08:10.667766Z","steps":["trace[761896218] 'process raft request'  (duration: 122.227214ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:08:30 up 51 min,  0 user,  load average: 4.73, 3.80, 2.41
	Linux embed-certs-742794 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9eec54ef0eb86273caa75b15f014b05844823ed2fcbbe238e3b384a5d99b6639] <==
	I1216 03:07:38.481269       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 03:07:38.481534       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 03:07:38.481694       1 main.go:148] setting mtu 1500 for CNI 
	I1216 03:07:38.481716       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 03:07:38.481741       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T03:07:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 03:07:38.879865       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 03:07:38.880000       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 03:07:38.880026       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 03:07:38.880226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 03:07:39.180265       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 03:07:39.180305       1 metrics.go:72] Registering metrics
	I1216 03:07:39.180386       1 controller.go:711] "Syncing nftables rules"
	I1216 03:07:48.683713       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:07:48.683949       1 main.go:301] handling current node
	I1216 03:07:58.686998       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:07:58.687049       1 main.go:301] handling current node
	I1216 03:08:08.684092       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:08:08.684154       1 main.go:301] handling current node
	I1216 03:08:18.689929       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:08:18.689968       1 main.go:301] handling current node
	I1216 03:08:28.686963       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 03:08:28.687012       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a181636c6acb97bb608ea7a6cee423c766f5c5b809c9f71463703439007e8b17] <==
	I1216 03:07:37.420546       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:07:37.420371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:07:37.420589       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 03:07:37.420600       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:07:37.420610       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:07:37.420616       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:07:37.420622       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:07:37.420786       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 03:07:37.420786       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 03:07:37.420472       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 03:07:37.420992       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 03:07:37.427785       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:07:37.442838       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 03:07:37.445641       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:07:37.661992       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 03:07:37.689316       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:07:37.709234       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:07:37.716309       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:07:37.723712       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:07:37.757362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.207.243"}
	I1216 03:07:37.769025       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.150.218"}
	I1216 03:07:38.327423       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:07:41.102942       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 03:07:41.303534       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:07:41.353267       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [81e653c21515d606ea13ae7cc6d22ed82d4602cf4029cf8f71ab38a7b6a21823] <==
	I1216 03:07:40.750579       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 03:07:40.752332       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 03:07:40.754140       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:07:40.754196       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:07:40.754239       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:07:40.754247       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:07:40.754254       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:07:40.754490       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 03:07:40.756518       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:07:40.757520       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:07:40.758378       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 03:07:40.760869       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 03:07:40.761016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:07:40.764319       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 03:07:40.764501       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 03:07:40.764593       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-742794"
	I1216 03:07:40.764662       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 03:07:40.766427       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 03:07:40.767617       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:07:40.769507       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 03:07:40.771910       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 03:07:40.773101       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 03:07:40.779269       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:07:40.780759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:07:40.784997       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [ab93683ff228de0b42359c8c20af8f7ff9fc95e2443f32138c095e7e5f671a02] <==
	I1216 03:07:38.285296       1 server_linux.go:53] "Using iptables proxy"
	I1216 03:07:38.356357       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:07:38.456919       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:07:38.456958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1216 03:07:38.457080       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:07:38.482139       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 03:07:38.482208       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:07:38.489329       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:07:38.489909       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:07:38.489984       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:07:38.492036       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:07:38.492121       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:07:38.492173       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:07:38.492180       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:07:38.492082       1 config.go:200] "Starting service config controller"
	I1216 03:07:38.492196       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:07:38.492279       1 config.go:309] "Starting node config controller"
	I1216 03:07:38.492291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:07:38.492298       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:07:38.592335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:07:38.592409       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:07:38.592411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cf6f05491bb981c385f482944e6fdb86fd324db78c798013d940ed415f22f291] <==
	I1216 03:07:35.332963       1 serving.go:386] Generated self-signed cert in-memory
	W1216 03:07:37.348497       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:07:37.348644       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:07:37.348663       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:07:37.348674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:07:37.383236       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:07:37.383263       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:07:37.386506       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:07:37.386571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:07:37.386795       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:07:37.386582       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:07:37.487102       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 03:07:41 embed-certs-742794 kubelet[726]: I1216 03:07:41.273070     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttvk\" (UniqueName: \"kubernetes.io/projected/0e3fb1ad-a5ab-41e6-94be-9b09ed1209a6-kube-api-access-dttvk\") pod \"kubernetes-dashboard-855c9754f9-4srjf\" (UID: \"0e3fb1ad-a5ab-41e6-94be-9b09ed1209a6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4srjf"
	Dec 16 03:07:44 embed-certs-742794 kubelet[726]: I1216 03:07:44.953639     726 scope.go:117] "RemoveContainer" containerID="0a508415f491dddd399750df78e09cc673a2434fb933adafa466abd31e00c266"
	Dec 16 03:07:45 embed-certs-742794 kubelet[726]: I1216 03:07:45.964413     726 scope.go:117] "RemoveContainer" containerID="0a508415f491dddd399750df78e09cc673a2434fb933adafa466abd31e00c266"
	Dec 16 03:07:45 embed-certs-742794 kubelet[726]: I1216 03:07:45.964672     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:07:45 embed-certs-742794 kubelet[726]: E1216 03:07:45.965175     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:07:46 embed-certs-742794 kubelet[726]: I1216 03:07:46.972286     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:07:46 embed-certs-742794 kubelet[726]: E1216 03:07:46.972513     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:07:49 embed-certs-742794 kubelet[726]: I1216 03:07:49.211061     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:07:49 embed-certs-742794 kubelet[726]: E1216 03:07:49.211310     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:07:52 embed-certs-742794 kubelet[726]: I1216 03:07:52.117522     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4srjf" podStartSLOduration=4.174256948 podStartE2EDuration="11.117499003s" podCreationTimestamp="2025-12-16 03:07:41 +0000 UTC" firstStartedPulling="2025-12-16 03:07:41.498416602 +0000 UTC m=+7.734941453" lastFinishedPulling="2025-12-16 03:07:48.441658668 +0000 UTC m=+14.678183508" observedRunningTime="2025-12-16 03:07:48.988181159 +0000 UTC m=+15.224706025" watchObservedRunningTime="2025-12-16 03:07:52.117499003 +0000 UTC m=+18.354023862"
	Dec 16 03:08:00 embed-certs-742794 kubelet[726]: I1216 03:08:00.871679     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:08:01 embed-certs-742794 kubelet[726]: I1216 03:08:01.014880     726 scope.go:117] "RemoveContainer" containerID="1424aeeed8daabde30c0065b40cccfd5e98f3cb63f4485fa8edd804cd0b64a93"
	Dec 16 03:08:01 embed-certs-742794 kubelet[726]: I1216 03:08:01.015288     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:01 embed-certs-742794 kubelet[726]: E1216 03:08:01.015492     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:08:09 embed-certs-742794 kubelet[726]: I1216 03:08:09.038202     726 scope.go:117] "RemoveContainer" containerID="7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f"
	Dec 16 03:08:09 embed-certs-742794 kubelet[726]: I1216 03:08:09.211670     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:09 embed-certs-742794 kubelet[726]: E1216 03:08:09.211900     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:08:23 embed-certs-742794 kubelet[726]: I1216 03:08:23.872038     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:24 embed-certs-742794 kubelet[726]: I1216 03:08:24.082667     726 scope.go:117] "RemoveContainer" containerID="9173f2d8540aad4aefaca7b1f1d0c54850ce35f68f6dd15e50f85d1440146d0f"
	Dec 16 03:08:24 embed-certs-742794 kubelet[726]: I1216 03:08:24.082924     726 scope.go:117] "RemoveContainer" containerID="0916cce9701940870f2b8ae16ccc058651f60deb847bf86a60f5835ba4a1d9d6"
	Dec 16 03:08:24 embed-certs-742794 kubelet[726]: E1216 03:08:24.083135     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2wm6_kubernetes-dashboard(c747bed0-9c2a-4f91-8b84-732f59d4e000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2wm6" podUID="c747bed0-9c2a-4f91-8b84-732f59d4e000"
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:24 embed-certs-742794 systemd[1]: kubelet.service: Consumed 1.711s CPU time.
	
	
	==> kubernetes-dashboard [424c3093fc615de39945cad66d5ba586f5bee74a165ec3d30b0e055e1bbe7a17] <==
	2025/12/16 03:07:48 Using namespace: kubernetes-dashboard
	2025/12/16 03:07:48 Using in-cluster config to connect to apiserver
	2025/12/16 03:07:48 Using secret token for csrf signing
	2025/12/16 03:07:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 03:07:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 03:07:48 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 03:07:48 Generating JWE encryption key
	2025/12/16 03:07:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 03:07:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 03:07:48 Initializing JWE encryption key from synchronized object
	2025/12/16 03:07:48 Creating in-cluster Sidecar client
	2025/12/16 03:07:48 Serving insecurely on HTTP port: 9090
	2025/12/16 03:07:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:08:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 03:07:48 Starting overwatch
	
	
	==> storage-provisioner [7ec84b1f0e67e855f99417cf374785cc321c1144228ee7e236c867b350decd1f] <==
	I1216 03:07:38.235428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 03:08:08.239057       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d82d7118ed08792206463d1a868ca050b89fbebec5b92ef3cba731e5da561d68] <==
	I1216 03:08:09.086372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 03:08:09.094036       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 03:08:09.094078       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 03:08:09.096160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:12.550994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:16.811788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:20.410373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:23.464941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:26.486780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:26.492440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:08:26.492626       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 03:08:26.492811       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-742794_8a662e1d-b1c1-4e53-bd5e-71ccf8636d85!
	I1216 03:08:26.492810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcdc7c73-4d43-45a4-8fda-ffef275cc1fa", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-742794_8a662e1d-b1c1-4e53-bd5e-71ccf8636d85 became leader
	W1216 03:08:26.497182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:26.506031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 03:08:26.593557       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-742794_8a662e1d-b1c1-4e53-bd5e-71ccf8636d85!
	W1216 03:08:28.509344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 03:08:28.514724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742794 -n embed-certs-742794
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742794 -n embed-certs-742794: exit status 2 (344.267977ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-742794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.81s)


Test pass (351/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.39
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 3.02
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.01
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.83
31 TestOffline 53.24
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 123.28
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/serial/GCPAuth/FakeCredentials 9.41
57 TestAddons/StoppedEnableDisable 18.49
58 TestCertOptions 29.02
59 TestCertExpiration 214.67
61 TestForceSystemdFlag 24.66
62 TestForceSystemdEnv 42.54
67 TestErrorSpam/setup 21.05
68 TestErrorSpam/start 0.67
69 TestErrorSpam/status 0.95
70 TestErrorSpam/pause 6.93
71 TestErrorSpam/unpause 5.56
72 TestErrorSpam/stop 18.08
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 38
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.16
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.57
84 TestFunctional/serial/CacheCmd/cache/add_local 0.91
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 47.87
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.18
95 TestFunctional/serial/LogsFileCmd 1.19
96 TestFunctional/serial/InvalidService 3.95
98 TestFunctional/parallel/ConfigCmd 0.52
99 TestFunctional/parallel/DashboardCmd 6.54
100 TestFunctional/parallel/DryRun 0.46
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 1.03
106 TestFunctional/parallel/ServiceCmdConnect 8.55
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 19.55
110 TestFunctional/parallel/SSHCmd 0.69
111 TestFunctional/parallel/CpCmd 2.23
112 TestFunctional/parallel/MySQL 20.32
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 1.96
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
122 TestFunctional/parallel/License 0.27
123 TestFunctional/parallel/Version/short 0.11
124 TestFunctional/parallel/Version/components 0.7
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 7.22
130 TestFunctional/parallel/ServiceCmd/DeployApp 8.14
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
138 TestFunctional/parallel/ProfileCmd/profile_list 0.39
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
140 TestFunctional/parallel/ServiceCmd/List 0.56
141 TestFunctional/parallel/MountCmd/any-port 6
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
144 TestFunctional/parallel/ServiceCmd/Format 0.42
145 TestFunctional/parallel/ServiceCmd/URL 0.36
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
153 TestFunctional/parallel/ImageCommands/ImageBuild 6.61
154 TestFunctional/parallel/ImageCommands/Setup 0.41
155 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.02
158 TestFunctional/parallel/MountCmd/specific-port 1.82
159 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
160 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
161 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 34.65
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.08
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.49
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.84
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.52
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 48
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.2
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.23
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.93
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 6.96
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.41
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.06
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 6.76
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 25.57
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.56
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.89
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 23.19
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.72
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.7
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.29
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.18
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.51
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.91
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.45
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.45
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.2
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.32
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 11.22
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.07
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.97
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.92
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.99
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.35
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.35
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.37
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.22
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.58
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.25
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.99
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.21
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 5.54
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.83
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 0.99
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.52
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.64
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 113.06
266 TestMultiControlPlane/serial/DeployApp 4.83
267 TestMultiControlPlane/serial/PingHostFromPods 1.09
268 TestMultiControlPlane/serial/AddWorkerNode 56.62
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
271 TestMultiControlPlane/serial/CopyFile 17.39
272 TestMultiControlPlane/serial/StopSecondaryNode 13.28
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.15
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 119.7
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.62
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
279 TestMultiControlPlane/serial/StopCluster 47.68
280 TestMultiControlPlane/serial/RestartCluster 51.05
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
282 TestMultiControlPlane/serial/AddSecondaryNode 40.64
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
288 TestJSONOutput/start/Command 40.91
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.11
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 30.42
314 TestKicCustomNetwork/use_default_bridge_network 21.86
315 TestKicExistingNetwork 25.92
316 TestKicCustomSubnet 23.89
317 TestKicStaticIP 26.99
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 50.67
322 TestMountStart/serial/StartWithMountFirst 4.81
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 4.83
325 TestMountStart/serial/VerifyMountSecond 0.28
326 TestMountStart/serial/DeleteFirst 1.69
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.22
330 TestMountStart/serial/VerifyMountPostStop 0.28
333 TestMultiNode/serial/FreshStart2Nodes 90.23
334 TestMultiNode/serial/DeployApp2Nodes 3.45
335 TestMultiNode/serial/PingHostFrom2Pods 0.75
336 TestMultiNode/serial/AddNode 53.15
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.66
339 TestMultiNode/serial/CopyFile 9.96
340 TestMultiNode/serial/StopNode 2.28
341 TestMultiNode/serial/StartAfterStop 7.2
342 TestMultiNode/serial/RestartKeepsNodes 78.76
343 TestMultiNode/serial/DeleteNode 5.22
344 TestMultiNode/serial/StopMultiNode 30.37
345 TestMultiNode/serial/RestartMultiNode 50.97
346 TestMultiNode/serial/ValidateNameConflict 25.84
351 TestPreload 100.83
353 TestScheduledStopUnix 98.1
356 TestInsufficientStorage 8.8
357 TestRunningBinaryUpgrade 292.48
359 TestKubernetesUpgrade 293.81
360 TestMissingContainerUpgrade 66.08
362 TestStoppedBinaryUpgrade/Setup 0.79
363 TestPause/serial/Start 56.43
364 TestStoppedBinaryUpgrade/Upgrade 306.69
365 TestPause/serial/SecondStartNoReconfiguration 7.56
375 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
376 TestNoKubernetes/serial/StartWithK8s 24.43
377 TestNoKubernetes/serial/StartWithStopK8s 16.47
378 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
379 TestNoKubernetes/serial/Start 7.13
387 TestNetworkPlugins/group/false 3.78
388 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
389 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
390 TestNoKubernetes/serial/ProfileList 1.4
391 TestNoKubernetes/serial/Stop 1.31
395 TestNoKubernetes/serial/StartNoArgs 9.09
397 TestStartStop/group/old-k8s-version/serial/FirstStart 50.95
398 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
400 TestStartStop/group/no-preload/serial/FirstStart 47.97
401 TestStartStop/group/old-k8s-version/serial/DeployApp 8.29
402 TestStartStop/group/no-preload/serial/DeployApp 7.26
404 TestStartStop/group/old-k8s-version/serial/Stop 16.08
406 TestStartStop/group/no-preload/serial/Stop 16.81
408 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.28
409 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
410 TestStartStop/group/old-k8s-version/serial/SecondStart 46.83
411 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
412 TestStartStop/group/no-preload/serial/SecondStart 48.45
413 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
415 TestStartStop/group/newest-cni/serial/FirstStart 24.3
417 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.3
418 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
419 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
420 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
421 TestStartStop/group/newest-cni/serial/DeployApp 0
423 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
425 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
426 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.24
427 TestStartStop/group/newest-cni/serial/Stop 8.29
428 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
429 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
431 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
432 TestStartStop/group/newest-cni/serial/SecondStart 13.08
434 TestStartStop/group/embed-certs/serial/FirstStart 44.49
435 TestNetworkPlugins/group/auto/Start 38.92
436 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
437 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
438 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
440 TestNetworkPlugins/group/kindnet/Start 43.64
441 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
442 TestStartStop/group/embed-certs/serial/DeployApp 7.23
443 TestNetworkPlugins/group/auto/KubeletFlags 0.33
444 TestNetworkPlugins/group/auto/NetCatPod 8.2
445 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
447 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
449 TestStartStop/group/embed-certs/serial/Stop 16.24
450 TestNetworkPlugins/group/auto/DNS 0.12
451 TestNetworkPlugins/group/auto/Localhost 0.09
452 TestNetworkPlugins/group/auto/HairPin 0.1
453 TestNetworkPlugins/group/calico/Start 49.73
454 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
455 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
456 TestStartStop/group/embed-certs/serial/SecondStart 47.33
457 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
458 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
459 TestNetworkPlugins/group/custom-flannel/Start 51.17
460 TestNetworkPlugins/group/kindnet/DNS 0.19
461 TestNetworkPlugins/group/kindnet/Localhost 0.14
462 TestNetworkPlugins/group/kindnet/HairPin 0.13
463 TestNetworkPlugins/group/enable-default-cni/Start 64.64
464 TestNetworkPlugins/group/calico/ControllerPod 6.01
465 TestNetworkPlugins/group/calico/KubeletFlags 0.32
466 TestNetworkPlugins/group/calico/NetCatPod 8.26
467 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
468 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
469 TestNetworkPlugins/group/calico/DNS 0.12
470 TestNetworkPlugins/group/calico/Localhost 0.1
471 TestNetworkPlugins/group/calico/HairPin 0.1
472 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
473 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
474 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
476 TestNetworkPlugins/group/custom-flannel/DNS 0.13
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
479 TestNetworkPlugins/group/flannel/Start 52.83
480 TestNetworkPlugins/group/bridge/Start 67.32
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
486 TestNetworkPlugins/group/flannel/ControllerPod 6.01
487 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
488 TestNetworkPlugins/group/flannel/NetCatPod 7.17
489 TestNetworkPlugins/group/flannel/DNS 0.13
490 TestNetworkPlugins/group/flannel/Localhost 0.09
491 TestNetworkPlugins/group/flannel/HairPin 0.09
492 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
493 TestNetworkPlugins/group/bridge/NetCatPod 9.19
494 TestNetworkPlugins/group/bridge/DNS 0.13
495 TestNetworkPlugins/group/bridge/Localhost 0.09
496 TestNetworkPlugins/group/bridge/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (4.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-407168 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-407168 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.38892557s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 02:25:11.719758    8586 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1216 02:25:11.719863    8586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
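The preload-exists subtest only confirms that the download-only start above left the preloaded image tarball in the local cache; no cluster is created. A minimal manual equivalent, as a sketch (the cache root is the MINIKUBE_HOME used by this run, substitute your own):

	# confirm the cri-o preload for v1.28.0 was cached by the download-only start
	ls -lh /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4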

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-407168
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-407168: exit status 85 (71.630197ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-407168 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-407168 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:07.380769    8599 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:07.380993    8599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:07.381002    8599 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:07.381007    8599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:07.381193    8599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	W1216 02:25:07.381309    8599 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22158-5058/.minikube/config/config.json: open /home/jenkins/minikube-integration/22158-5058/.minikube/config/config.json: no such file or directory
	I1216 02:25:07.381731    8599 out.go:368] Setting JSON to true
	I1216 02:25:07.382583    8599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":459,"bootTime":1765851448,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:07.382644    8599 start.go:143] virtualization: kvm guest
	I1216 02:25:07.386410    8599 out.go:99] [download-only-407168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:25:07.386534    8599 notify.go:221] Checking for updates...
	W1216 02:25:07.386556    8599 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 02:25:07.387833    8599 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:25:07.389099    8599 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:07.390393    8599 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:25:07.391584    8599 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:25:07.392691    8599 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 02:25:07.394907    8599 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:25:07.395112    8599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:07.420345    8599 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:25:07.420420    8599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:07.655478    8599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-16 02:25:07.644651618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:07.655586    8599 docker.go:319] overlay module found
	I1216 02:25:07.657247    8599 out.go:99] Using the docker driver based on user configuration
	I1216 02:25:07.657270    8599 start.go:309] selected driver: docker
	I1216 02:25:07.657276    8599 start.go:927] validating driver "docker" against <nil>
	I1216 02:25:07.657357    8599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:07.712125    8599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-16 02:25:07.701183331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:07.712330    8599 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:07.712893    8599 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1216 02:25:07.713066    8599 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:25:07.714716    8599 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-407168 host does not exist
	  To start a cluster, run: "minikube start -p download-only-407168"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
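The exit status 85 above is expected: a --download-only start never creates the control-plane host, so "minikube logs" has nothing to collect, and the test counts the non-zero exit as a pass. A rough manual reproduction, as a sketch reusing the same profile and flags from the log:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-407168 --force --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
	out/minikube-linux-amd64 logs -p download-only-407168; echo "exit: $?"   # prints the "host does not exist" hint and exits 85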

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-407168
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (3.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-217377 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-217377 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.015313012s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 02:25:15.167974    8586 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 02:25:15.168004    8586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-217377
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-217377: exit status 85 (74.413623ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-407168 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-407168 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-407168                                                                                                                                                   │ download-only-407168 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-217377 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-217377 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:12.204394    8955 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:12.204592    8955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:12.204599    8955 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:12.204604    8955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:12.204756    8955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:25:12.205211    8955 out.go:368] Setting JSON to true
	I1216 02:25:12.205983    8955 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":464,"bootTime":1765851448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:12.206034    8955 start.go:143] virtualization: kvm guest
	I1216 02:25:12.207953    8955 out.go:99] [download-only-217377] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:25:12.208069    8955 notify.go:221] Checking for updates...
	I1216 02:25:12.209325    8955 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:25:12.210792    8955 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:12.212185    8955 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:25:12.213392    8955 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:25:12.214501    8955 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 02:25:12.216654    8955 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:25:12.216888    8955 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:12.240362    8955 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:25:12.240427    8955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:12.294272    8955 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-16 02:25:12.284986843 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:12.294382    8955 docker.go:319] overlay module found
	I1216 02:25:12.296044    8955 out.go:99] Using the docker driver based on user configuration
	I1216 02:25:12.296070    8955 start.go:309] selected driver: docker
	I1216 02:25:12.296078    8955 start.go:927] validating driver "docker" against <nil>
	I1216 02:25:12.296153    8955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:12.350511    8955 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-16 02:25:12.341341767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:12.350673    8955 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:12.351247    8955 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1216 02:25:12.351397    8955 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:25:12.353382    8955 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-217377 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217377"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-217377
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (4.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-388456 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-388456 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.013020006s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.01s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 02:25:19.635776    8586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1216 02:25:19.635812    8586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-388456
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-388456: exit status 85 (73.943075ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-407168 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-407168 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-407168                                                                                                                                                          │ download-only-407168 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-217377 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-217377 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-217377                                                                                                                                                          │ download-only-217377 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-388456 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-388456 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:15.675721    9315 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:15.675833    9315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:15.675843    9315 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:15.675847    9315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:15.676054    9315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:25:15.676516    9315 out.go:368] Setting JSON to true
	I1216 02:25:15.677363    9315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":468,"bootTime":1765851448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:15.677421    9315 start.go:143] virtualization: kvm guest
	I1216 02:25:15.679902    9315 out.go:99] [download-only-388456] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:25:15.680071    9315 notify.go:221] Checking for updates...
	I1216 02:25:15.681395    9315 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:25:15.683064    9315 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:15.684476    9315 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:25:15.685758    9315 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:25:15.686973    9315 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 02:25:15.689363    9315 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:25:15.689642    9315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:15.713435    9315 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:25:15.713499    9315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:15.770488    9315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-16 02:25:15.759627251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:15.770580    9315 docker.go:319] overlay module found
	I1216 02:25:15.772242    9315 out.go:99] Using the docker driver based on user configuration
	I1216 02:25:15.772276    9315 start.go:309] selected driver: docker
	I1216 02:25:15.772284    9315 start.go:927] validating driver "docker" against <nil>
	I1216 02:25:15.772350    9315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:25:15.825918    9315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-16 02:25:15.816591948 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:25:15.826065    9315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:15.826504    9315 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1216 02:25:15.826638    9315 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:25:15.828357    9315 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-388456 host does not exist
	  To start a cluster, run: "minikube start -p download-only-388456"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-388456
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-622909 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-622909" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-622909
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
x
+
TestBinaryMirror (0.83s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 02:25:20.896891    8586 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-346468 --alsologtostderr --binary-mirror http://127.0.0.1:41995 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-346468" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-346468
--- PASS: TestBinaryMirror (0.83s)
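TestBinaryMirror runs a download-only start with --binary-mirror pointing at a throwaway local HTTP server, so the Kubernetes binaries are fetched from the mirror instead of dl.k8s.io (the log above shows the kubectl URL that would otherwise be used). A minimal sketch of the same invocation outside the test harness; the address is a placeholder and assumes a file server mirroring the dl.k8s.io release layout is already listening there:

	# placeholder mirror address; the test itself starts its own ephemeral server on a random port
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr --binary-mirror http://127.0.0.1:41995 --driver=docker --container-runtime=crio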

                                                
                                    
x
+
TestOffline (53.24s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-827391 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-827391 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.702532461s)
helpers_test.go:176: Cleaning up "offline-crio-827391" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-827391
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-827391: (2.538699996s)
--- PASS: TestOffline (53.24s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-568105
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-568105: exit status 85 (62.242763ms)

                                                
                                                
-- stdout --
	* Profile "addons-568105" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-568105"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-568105
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-568105: exit status 85 (63.033118ms)

                                                
                                                
-- stdout --
	* Profile "addons-568105" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-568105"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (123.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-568105 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-568105 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.284029797s)
--- PASS: TestAddons/Setup (123.28s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-568105 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-568105 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (9.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-568105 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-568105 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [12352787-47ea-402d-9f11-e5894590b258] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [12352787-47ea-402d-9f11-e5894590b258] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.002854916s
addons_test.go:696: (dbg) Run:  kubectl --context addons-568105 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-568105 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-568105 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.41s)

TestAddons/StoppedEnableDisable (18.49s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-568105
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-568105: (18.197458828s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-568105
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-568105
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-568105
--- PASS: TestAddons/StoppedEnableDisable (18.49s)

TestCertOptions (29.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-436902 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-436902 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.711574014s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-436902 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-436902 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-436902 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-436902" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-436902
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-436902: (2.473080993s)
--- PASS: TestCertOptions (29.02s)

TestCertExpiration (214.67s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-332150 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-332150 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.979262433s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-332150 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-332150 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.231383302s)
helpers_test.go:176: Cleaning up "cert-expiration-332150" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-332150
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-332150: (2.461573975s)
--- PASS: TestCertExpiration (214.67s)

TestForceSystemdFlag (24.66s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-546137 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-546137 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.864735632s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-546137 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-546137" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-546137
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-546137: (2.475448132s)
--- PASS: TestForceSystemdFlag (24.66s)

TestForceSystemdEnv (42.54s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-849216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-849216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.891548656s)
helpers_test.go:176: Cleaning up "force-systemd-env-849216" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-849216
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-849216: (2.645814388s)
--- PASS: TestForceSystemdEnv (42.54s)

TestErrorSpam/setup (21.05s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-370478 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-370478 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-370478 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-370478 --driver=docker  --container-runtime=crio: (21.04935156s)
--- PASS: TestErrorSpam/setup (21.05s)

TestErrorSpam/start (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (6.93s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause: exit status 80 (2.318820366s)

                                                
                                                
-- stdout --
	* Pausing node nospam-370478 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:30:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause: exit status 80 (2.361928251s)

                                                
                                                
-- stdout --
	* Pausing node nospam-370478 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:30:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause: exit status 80 (2.245241393s)

                                                
                                                
-- stdout --
	* Pausing node nospam-370478 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:30:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.93s)

TestErrorSpam/unpause (5.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause: exit status 80 (2.273498832s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-370478 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:31:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause: exit status 80 (1.589882306s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-370478 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:31:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause: exit status 80 (1.694410318s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-370478 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T02:31:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.56s)

TestErrorSpam/stop (18.08s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 stop: (17.880193894s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-370478 --log_dir /tmp/nospam-370478 stop
--- PASS: TestErrorSpam/stop (18.08s)

TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/test/nested/copy/8586/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-781918 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-781918 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.00268579s)
--- PASS: TestFunctional/serial/StartWithProxy (38.00s)

TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.16s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 02:32:05.866493    8586 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-781918 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-781918 --alsologtostderr -v=8: (6.162524377s)
functional_test.go:678: soft start took 6.163426944s for "functional-781918" cluster.
I1216 02:32:12.029397    8586 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.16s)

TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-781918 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-781918 /tmp/TestFunctionalserialCacheCmdcacheadd_local3295275812/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cache add minikube-local-cache-test:functional-781918
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cache delete minikube-local-cache-test:functional-781918
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-781918
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.204687ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 kubectl -- --context functional-781918 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-781918 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (47.87s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-781918 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 02:32:25.724154    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:25.730511    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:25.741896    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:25.763250    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:25.804674    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:25.886136    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:26.047662    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:26.369366    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:27.011414    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:28.292986    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:30.855896    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:35.977428    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:32:46.219731    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-781918 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.869628052s)
functional_test.go:776: restart took 47.869743937s for "functional-781918" cluster.
I1216 02:33:05.792325    8586 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (47.87s)

TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-781918 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 logs
E1216 02:33:06.701315    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 logs: (1.179950979s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 logs --file /tmp/TestFunctionalserialLogsFileCmd2231438880/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 logs --file /tmp/TestFunctionalserialLogsFileCmd2231438880/001/logs.txt: (1.186841398s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

TestFunctional/serial/InvalidService (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-781918 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-781918
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-781918: exit status 115 (339.927158ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30299 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-781918 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)

TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 config get cpus: exit status 14 (122.033933ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 config get cpus: exit status 14 (87.077141ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (6.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-781918 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-781918 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 44093: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.54s)

TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-781918 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-781918 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.867908ms)

                                                
                                                
-- stdout --
	* [functional-781918] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:33:22.696087   43317 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:33:22.696215   43317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:33:22.696226   43317 out.go:374] Setting ErrFile to fd 2...
	I1216 02:33:22.696230   43317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:33:22.696610   43317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:33:22.697264   43317 out.go:368] Setting JSON to false
	I1216 02:33:22.699318   43317 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":955,"bootTime":1765851448,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:33:22.699450   43317 start.go:143] virtualization: kvm guest
	I1216 02:33:22.704323   43317 out.go:179] * [functional-781918] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:33:22.705729   43317 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:33:22.705735   43317 notify.go:221] Checking for updates...
	I1216 02:33:22.707213   43317 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:33:22.708614   43317 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:33:22.709762   43317 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:33:22.711187   43317 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:33:22.715974   43317 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:33:22.717741   43317 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:33:22.718555   43317 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:33:22.748712   43317 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:33:22.748793   43317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:33:22.818237   43317 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 02:33:22.807618033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:33:22.818391   43317 docker.go:319] overlay module found
	I1216 02:33:22.821101   43317 out.go:179] * Using the docker driver based on existing profile
	I1216 02:33:22.823379   43317 start.go:309] selected driver: docker
	I1216 02:33:22.823396   43317 start.go:927] validating driver "docker" against &{Name:functional-781918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-781918 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:33:22.823498   43317 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:33:22.825434   43317 out.go:203] 
	W1216 02:33:22.826602   43317 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 02:33:22.827714   43317 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-781918 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)

TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-781918 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-781918 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.490222ms)

                                                
                                                
-- stdout --
	* [functional-781918] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:33:22.512033   43183 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:33:22.512129   43183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:33:22.512136   43183 out.go:374] Setting ErrFile to fd 2...
	I1216 02:33:22.512141   43183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:33:22.512420   43183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:33:22.512838   43183 out.go:368] Setting JSON to false
	I1216 02:33:22.513754   43183 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":954,"bootTime":1765851448,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:33:22.513841   43183 start.go:143] virtualization: kvm guest
	I1216 02:33:22.516512   43183 out.go:179] * [functional-781918] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 02:33:22.517658   43183 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:33:22.517665   43183 notify.go:221] Checking for updates...
	I1216 02:33:22.520077   43183 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:33:22.521365   43183 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:33:22.522532   43183 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:33:22.523721   43183 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:33:22.525037   43183 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:33:22.526938   43183 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:33:22.527553   43183 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:33:22.553471   43183 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:33:22.553556   43183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:33:22.618348   43183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 02:33:22.606848848 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:33:22.618492   43183 docker.go:319] overlay module found
	I1216 02:33:22.620599   43183 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 02:33:22.621905   43183 start.go:309] selected driver: docker
	I1216 02:33:22.621920   43183 start.go:927] validating driver "docker" against &{Name:functional-781918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-781918 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:33:22.622031   43183 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:33:22.623879   43183 out.go:203] 
	W1216 02:33:22.625117   43183 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 02:33:22.626316   43183 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
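The French messages above are the same 250MB dry-run rejection rendered through minikube's localization. How the harness selects the locale is not visible in the captured command line; a manual reproduction would presumably set a French locale in the environment, for example:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-781918 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio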

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-781918 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-781918 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-9rjm8" [cb369616-0302-4410-8a74-4bd4b533e373] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-9rjm8" [cb369616-0302-4410-8a74-4bd4b533e373] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004395376s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30770
functional_test.go:1680: http://192.168.49.2:30770: success! body:
Request served by hello-node-connect-7d85dfc575-9rjm8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30770
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.55s)
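While the cluster is still up, the NodePort endpoint the test resolved can be probed directly from the host (a sketch, assuming curl is available; the URL is the one printed above):

	curl http://192.168.49.2:30770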

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (19.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [76e2dc99-c111-46b2-ac31-a12f869f17ff] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002934967s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-781918 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-781918 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-781918 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-781918 apply -f testdata/storage-provisioner/pod.yaml
I1216 02:33:19.041840    8586 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1208e73a-8a40-40e1-94da-47900f1c2f03] Pending
helpers_test.go:353: "sp-pod" [1208e73a-8a40-40e1-94da-47900f1c2f03] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003679205s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-781918 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-781918 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-781918 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [bec297a5-f6c3-4559-a951-1261c167e36e] Pending
helpers_test.go:353: "sp-pod" [bec297a5-f6c3-4559-a951-1261c167e36e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005354929s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-781918 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.55s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh -n functional-781918 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cp functional-781918:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd345781397/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh -n functional-781918 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh -n functional-781918 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.23s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (20.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-781918 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-n5gzl" [f0702344-7534-492c-8d41-a73f6b2ca80d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-n5gzl" [f0702344-7534-492c-8d41-a73f6b2ca80d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.034900549s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-781918 exec mysql-6bcdcbc558-n5gzl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-781918 exec mysql-6bcdcbc558-n5gzl -- mysql -ppassword -e "show databases;": exit status 1 (126.106039ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:33:44.623253    8586 retry.go:31] will retry after 1.148158246s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-781918 exec mysql-6bcdcbc558-n5gzl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-781918 exec mysql-6bcdcbc558-n5gzl -- mysql -ppassword -e "show databases;": exit status 1 (89.475746ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:33:45.861701    8586 retry.go:31] will retry after 1.604083315s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-781918 exec mysql-6bcdcbc558-n5gzl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-781918 exec mysql-6bcdcbc558-n5gzl -- mysql -ppassword -e "show databases;": exit status 1 (87.124746ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:33:47.554752    8586 retry.go:31] will retry after 2.943735355s: exit status 1
E1216 02:33:47.662953    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-781918 exec mysql-6bcdcbc558-n5gzl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.32s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8586/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo cat /etc/test/nested/copy/8586/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8586.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo cat /etc/ssl/certs/8586.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8586.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo cat /usr/share/ca-certificates/8586.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/85862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo cat /etc/ssl/certs/85862.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/85862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo cat /usr/share/ca-certificates/85862.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-781918 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh "sudo systemctl is-active docker": exit status 1 (361.670408ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh "sudo systemctl is-active containerd": exit status 1 (329.521758ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-781918 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-781918 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-781918 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 40290: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-781918 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-781918 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-781918 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [1440ffdc-26a1-4ad3-b6c2-984df3fbcc04] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [1440ffdc-26a1-4ad3-b6c2-984df3fbcc04] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 7.002813043s
I1216 02:33:19.893158    8586 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-781918 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-781918 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-6npnd" [044465f5-db12-4e2e-a143-cf4df786983c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-6npnd" [044465f5-db12-4e2e-a143-cf4df786983c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003824993s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-781918 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.84.80 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-781918 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "328.118571ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.056622ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "345.625592ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "72.383503ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdany-port3838989462/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765852401270970494" to /tmp/TestFunctionalparallelMountCmdany-port3838989462/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765852401270970494" to /tmp/TestFunctionalparallelMountCmdany-port3838989462/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765852401270970494" to /tmp/TestFunctionalparallelMountCmdany-port3838989462/001/test-1765852401270970494
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (314.472399ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:33:21.585743    8586 retry.go:31] will retry after 398.764023ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 02:33 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 02:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 02:33 test-1765852401270970494
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh cat /mount-9p/test-1765852401270970494
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-781918 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [4ed3f111-82dd-4c1c-bfc9-7ed3be4299f4] Pending
helpers_test.go:353: "busybox-mount" [4ed3f111-82dd-4c1c-bfc9-7ed3be4299f4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [4ed3f111-82dd-4c1c-bfc9-7ed3be4299f4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [4ed3f111-82dd-4c1c-bfc9-7ed3be4299f4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002743241s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-781918 logs busybox-mount
I1216 02:33:26.084485    8586 detect.go:223] nested VM detected
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdany-port3838989462/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.00s)
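The same 9p mount can be driven by hand outside the harness (a sketch; /path/on/host is a placeholder for any host directory, where the test used a temporary one):

	out/minikube-linux-amd64 mount -p functional-781918 /path/on/host:/mount-9p --alsologtostderr -v=1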

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 service list -o json
functional_test.go:1504: Took "525.414926ms" to run "out/minikube-linux-amd64 -p functional-781918 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31517
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31517
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh pgrep buildkitd: exit status 1 (344.064613ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image build -t localhost/my-image:functional-781918 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 image build -t localhost/my-image:functional-781918 testdata/build --alsologtostderr: (6.031942718s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-781918 image build -t localhost/my-image:functional-781918 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4209c4bf82a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-781918
--> 7e4ec030d15
Successfully tagged localhost/my-image:functional-781918
7e4ec030d15fdcea742d2858cf4022fa8cfb77b6bcbfa2924ab9c847e352f838
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-781918 image build -t localhost/my-image:functional-781918 testdata/build --alsologtostderr:
I1216 02:33:33.582022   48889 out.go:360] Setting OutFile to fd 1 ...
I1216 02:33:33.584272   48889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:33.584285   48889 out.go:374] Setting ErrFile to fd 2...
I1216 02:33:33.584290   48889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:33:33.584565   48889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:33:33.585301   48889 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:33.586156   48889 config.go:182] Loaded profile config "functional-781918": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:33:33.586786   48889 cli_runner.go:164] Run: docker container inspect functional-781918 --format={{.State.Status}}
I1216 02:33:33.608493   48889 ssh_runner.go:195] Run: systemctl --version
I1216 02:33:33.608539   48889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-781918
I1216 02:33:33.631440   48889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-781918/id_rsa Username:docker}
I1216 02:33:33.736197   48889 build_images.go:162] Building image from path: /tmp/build.1724827246.tar
I1216 02:33:33.736261   48889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 02:33:33.750392   48889 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1724827246.tar
I1216 02:33:33.755017   48889 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1724827246.tar: stat -c "%s %y" /var/lib/minikube/build/build.1724827246.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1724827246.tar': No such file or directory
I1216 02:33:33.755052   48889 ssh_runner.go:362] scp /tmp/build.1724827246.tar --> /var/lib/minikube/build/build.1724827246.tar (3072 bytes)
I1216 02:33:33.775170   48889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1724827246
I1216 02:33:33.784980   48889 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1724827246 -xf /var/lib/minikube/build/build.1724827246.tar
I1216 02:33:33.795833   48889 crio.go:315] Building image: /var/lib/minikube/build/build.1724827246
I1216 02:33:33.795936   48889 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-781918 /var/lib/minikube/build/build.1724827246 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 02:33:39.521365   48889 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-781918 /var/lib/minikube/build/build.1724827246 --cgroup-manager=cgroupfs: (5.725402025s)
I1216 02:33:39.521419   48889 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1724827246
I1216 02:33:39.529587   48889 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1724827246.tar
I1216 02:33:39.537312   48889 build_images.go:218] Built localhost/my-image:functional-781918 from /tmp/build.1724827246.tar
I1216 02:33:39.537350   48889 build_images.go:134] succeeded building to: functional-781918
I1216 02:33:39.537357   48889 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.61s)
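From the STEP lines in the build output, the Dockerfile under testdata/build evidently amounts to the following three instructions (reconstructed from the log rather than copied from the repository):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /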

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-781918
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image load --daemon kicbase/echo-server:functional-781918 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image load --daemon kicbase/echo-server:functional-781918 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-781918
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image load --daemon kicbase/echo-server:functional-781918 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-781918 image load --daemon kicbase/echo-server:functional-781918 --alsologtostderr: (1.606973694s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdspecific-port857760325/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.521067ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:33:27.610891    8586 retry.go:31] will retry after 348.405234ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdspecific-port857760325/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh "sudo umount -f /mount-9p": exit status 1 (286.790447ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-781918 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdspecific-port857760325/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)
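Note: a rough by-hand equivalent of the mount check above, assuming a running profile; the host path, guest path, and port are illustrative, and the mount command would normally be left running in a second terminal rather than backgrounded:

	# mount a host directory into the guest over 9p on a fixed port
	minikube mount -p my-profile /tmp/hostdir:/mount-9p --port 46464 &
	# confirm the 9p mount is visible from inside the guest
	minikube -p my-profile ssh "findmnt -T /mount-9p | grep 9p"
	# tear down any leftover mount processes for this profile
	minikube mount -p my-profile --kill=true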

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image save kicbase/echo-server:functional-781918 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image rm kicbase/echo-server:functional-781918 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)
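Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save/remove/restore round trip through a tarball. A sketch under the same assumptions as above (paths are placeholders):

	# export an image from the cluster runtime to a tarball on the host
	minikube -p my-profile image save kicbase/echo-server:my-profile /tmp/echo-server-save.tar
	# remove it from the cluster, then restore it from the tarball
	minikube -p my-profile image rm kicbase/echo-server:my-profile
	minikube -p my-profile image load /tmp/echo-server-save.tar
	minikube -p my-profile image ls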

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2100050578/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2100050578/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2100050578/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T" /mount1
2025/12/16 02:33:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T" /mount1: exit status 1 (351.23858ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:33:29.448773    8586 retry.go:31] will retry after 539.727879ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-781918 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2100050578/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2100050578/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-781918 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2100050578/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-781918
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-781918 image save --daemon kicbase/echo-server:functional-781918 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-781918
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
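Note: this is the inverse of ImageLoadDaemon; a sketch under the same assumptions:

	# copy an image from the cluster runtime back into the host Docker daemon
	minikube -p my-profile image save --daemon kicbase/echo-server:my-profile
	# with cri-o as the runtime the image comes back under the localhost/ prefix, as the inspect step above shows
	docker image inspect localhost/kicbase/echo-server:my-profile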

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-781918
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-781918
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-781918
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-5058/.minikube/files/etc/test/nested/copy/8586/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (34.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-986152 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-986152 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (34.654237584s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (34.65s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 02:34:28.393802    8586 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-986152 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-986152 --alsologtostderr -v=8: (6.078422213s)
functional_test.go:678: soft start took 6.078762796s for "functional-986152" cluster.
I1216 02:34:34.472634    8586 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-986152 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1155835784/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cache add minikube-local-cache-test:functional-986152
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cache delete minikube-local-cache-test:functional-986152
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-986152
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.984742ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)
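Note: the CacheCmd steps above exercise minikube's local image cache. A minimal sketch of the reload flow they verify, assuming a running profile:

	# cache an image locally and load it into the node
	minikube -p my-profile cache add registry.k8s.io/pause:latest
	# remove it inside the node, then restore it from the local cache
	minikube -p my-profile ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p my-profile cache reload
	minikube -p my-profile ssh sudo crictl inspecti registry.k8s.io/pause:latest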

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 kubectl -- --context functional-986152 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-986152 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-986152 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 02:35:09.587443    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-986152 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.004156187s)
functional_test.go:776: restart took 48.004262993s for "functional-986152" cluster.
I1216 02:35:28.191516    8586 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (48.00s)
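Note: ExtraConfig restarts the existing profile with an additional component flag and waits for everything to come back; a sketch of the invocation used above, assuming an installed minikube binary:

	# restart with an extra apiserver admission plugin and wait for all components to be ready
	minikube start -p my-profile \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all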

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-986152 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-986152 logs: (1.195020526s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs544542058/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-986152 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs544542058/001/logs.txt: (1.23349286s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-986152 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-986152
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-986152: exit status 115 (345.542025ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30461 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-986152 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 config get cpus: exit status 14 (80.042961ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 config get cpus: exit status 14 (83.963918ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
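Note: the exit status 14 above is the expected "key not found" result after an unset; a sketch of the same set/get/unset cycle under the same assumptions:

	minikube -p my-profile config set cpus 2      # persist a per-profile default
	minikube -p my-profile config get cpus        # prints 2
	minikube -p my-profile config unset cpus      # clear it again
	minikube -p my-profile config get cpus        # now exits 14: key not found in config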

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-986152 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-986152 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 61642: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-986152 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-986152 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (180.957817ms)

                                                
                                                
-- stdout --
	* [functional-986152] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:35:36.753172   60638 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:35:36.753469   60638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:36.753479   60638 out.go:374] Setting ErrFile to fd 2...
	I1216 02:35:36.753486   60638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:36.753766   60638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:35:36.754235   60638 out.go:368] Setting JSON to false
	I1216 02:35:36.755493   60638 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1089,"bootTime":1765851448,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:35:36.755572   60638 start.go:143] virtualization: kvm guest
	I1216 02:35:36.759952   60638 out.go:179] * [functional-986152] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:35:36.761408   60638 notify.go:221] Checking for updates...
	I1216 02:35:36.761420   60638 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:35:36.762739   60638 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:35:36.763927   60638 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:35:36.765072   60638 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:35:36.766284   60638 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:35:36.767566   60638 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:35:36.769497   60638 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 02:35:36.770339   60638 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:35:36.799129   60638 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:35:36.799271   60638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:35:36.857065   60638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 02:35:36.847149031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:35:36.857168   60638 docker.go:319] overlay module found
	I1216 02:35:36.858990   60638 out.go:179] * Using the docker driver based on existing profile
	I1216 02:35:36.860724   60638 start.go:309] selected driver: docker
	I1216 02:35:36.860744   60638 start.go:927] validating driver "docker" against &{Name:functional-986152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-986152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:35:36.860886   60638 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:35:36.862711   60638 out.go:203] 
	W1216 02:35:36.863966   60638 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 02:35:36.865330   60638 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-986152 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.41s)
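Note: --dry-run validates flags and configuration without modifying the cluster; as the output above shows, an undersized memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23). A sketch of the check, with the profile name as a placeholder:

	minikube start -p my-profile --dry-run --memory 250MB --driver=docker --container-runtime=crio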

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-986152 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-986152 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (187.116703ms)

                                                
                                                
-- stdout --
	* [functional-986152] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:35:36.562383   60456 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:35:36.562468   60456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:36.562473   60456 out.go:374] Setting ErrFile to fd 2...
	I1216 02:35:36.562476   60456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:36.562761   60456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:35:36.563189   60456 out.go:368] Setting JSON to false
	I1216 02:35:36.564281   60456 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1089,"bootTime":1765851448,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:35:36.564332   60456 start.go:143] virtualization: kvm guest
	I1216 02:35:36.566454   60456 out.go:179] * [functional-986152] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 02:35:36.569727   60456 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:35:36.569759   60456 notify.go:221] Checking for updates...
	I1216 02:35:36.575425   60456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:35:36.576682   60456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 02:35:36.577982   60456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 02:35:36.579092   60456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:35:36.580332   60456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:35:36.582158   60456 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 02:35:36.582924   60456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:35:36.611557   60456 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 02:35:36.611694   60456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:35:36.675244   60456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 02:35:36.664248795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:35:36.675408   60456 docker.go:319] overlay module found
	I1216 02:35:36.678191   60456 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 02:35:36.679368   60456 start.go:309] selected driver: docker
	I1216 02:35:36.679390   60456 start.go:927] validating driver "docker" against &{Name:functional-986152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-986152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:35:36.679492   60456 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:35:36.681420   60456 out.go:203] 
	W1216 02:35:36.682573   60456 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 02:35:36.683744   60456 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (6.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-986152 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-986152 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-h2tlm" [f5ec8b4c-b51b-4125-9177-067799a63c76] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-h2tlm" [f5ec8b4c-b51b-4125-9177-067799a63c76] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003601337s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31618
functional_test.go:1680: http://192.168.49.2:31618: success! body:
Request served by hello-node-connect-9f67c86d4-h2tlm

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31618
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (6.76s)
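Note: a by-hand version of the connectivity check above, assuming kubectl points at the profile's context; the deployment name is a placeholder:

	# run an echo server, expose it on a NodePort, and resolve its URL through minikube
	kubectl create deployment hello-node --image kicbase/echo-server
	kubectl expose deployment hello-node --type=NodePort --port=8080
	minikube -p my-profile service hello-node --url
	# curl the printed URL; the echo server replies with the request it received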

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (25.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e9e70ace-4d3b-4bb6-8b5e-ac113a3618fe] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004804459s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-986152 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-986152 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-986152 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-986152 apply -f testdata/storage-provisioner/pod.yaml
I1216 02:35:49.594553    8586 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [561714f6-7ef0-47c8-b8dd-026f2320e9ef] Pending
helpers_test.go:353: "sp-pod" [561714f6-7ef0-47c8-b8dd-026f2320e9ef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [561714f6-7ef0-47c8-b8dd-026f2320e9ef] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003531031s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-986152 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-986152 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-986152 apply -f testdata/storage-provisioner/pod.yaml
I1216 02:36:03.548676    8586 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [70092885-cd95-4b0e-9425-7924a00eba64] Pending
helpers_test.go:353: "sp-pod" [70092885-cd95-4b0e-9425-7924a00eba64] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003519367s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-986152 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (25.57s)
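The PVC part of this test applies testdata/storage-provisioner/pvc.yaml and then waits for the claim to bind before scheduling sp-pod. A rough Go sketch of that wait, using the kubectl context and claim name from this run (not the test's actual helper):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls "kubectl get pvc" until the claim reports phase Bound.
func waitForPVCBound(kubectlContext, claim string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pvc", claim, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for PVC %s to become Bound (last err: %v)", claim, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForPVCBound("functional-986152", "myclaim", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("PVC myclaim is Bound")
}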

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh -n functional-986152 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cp functional-986152:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2260115196/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh -n functional-986152 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh -n functional-986152 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.89s)
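The cp checks above are a round trip: copy a file into the node with "minikube cp", read it back over "minikube ssh", and compare. A small Go sketch of that round trip under the binary path, profile, and guest path used in this run (illustrative only):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64" // binary path used by this run
		profile  = "functional-986152"
		guest    = "/home/docker/cp-test.txt"
	)

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}

	// Copy the file into the node, then read it back over SSH and compare.
	if err := exec.Command(minikube, "-p", profile, "cp", "testdata/cp-test.txt", guest).Run(); err != nil {
		log.Fatalf("minikube cp: %v", err)
	}
	got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", profile, "sudo cat "+guest).Output()
	if err != nil {
		log.Fatalf("minikube ssh: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("content mismatch: got %q, want %q", got, want)
	}
	log.Println("cp round-trip OK")
}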

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-986152 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-4tgrf" [cf7c29d5-46de-4304-b2b9-9ea43fb60ce5] Pending
helpers_test.go:353: "mysql-7d7b65bc95-4tgrf" [cf7c29d5-46de-4304-b2b9-9ea43fb60ce5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-4tgrf" [cf7c29d5-46de-4304-b2b9-9ea43fb60ce5] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 14.00381669s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;": exit status 1 (98.372439ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:01.727806    8586 retry.go:31] will retry after 694.554284ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;": exit status 1 (100.042294ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:02.523655    8586 retry.go:31] will retry after 1.917076805s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;": exit status 1 (119.154334ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:04.561062    8586 retry.go:31] will retry after 1.762734162s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;": exit status 1 (88.931026ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:06.413965    8586 retry.go:31] will retry after 4.149606649s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-986152 exec mysql-7d7b65bc95-4tgrf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.19s)
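The ERROR 2002/1045 retries above are expected: the mysql pod reports Running before mysqld has finished initializing, so the first few "show databases;" attempts fail until the server socket comes up. A Go sketch of the same retry-with-backoff loop, with the pod name and context taken from this run (assumptions, not the test's code):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-986152", "exec", "mysql-7d7b65bc95-4tgrf",
		"--", "mysql", "-ppassword", "-e", "show databases;"}

	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("databases:\n%s", out)
			return
		}
		log.Printf("attempt %d failed (%v), retrying in %s", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff, similar in spirit to retry.go
	}
	log.Fatal("mysql never became reachable")
}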

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8586/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo cat /etc/test/nested/copy/8586/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8586.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo cat /etc/ssl/certs/8586.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8586.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo cat /usr/share/ca-certificates/8586.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/85862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo cat /etc/ssl/certs/85862.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/85862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo cat /usr/share/ca-certificates/85862.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.72s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-986152 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh "sudo systemctl is-active docker": exit status 1 (360.260526ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh "sudo systemctl is-active containerd": exit status 1 (340.673056ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.70s)
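The "Non-zero exit" entries above are the passing case: "systemctl is-active" exits 0 only for an active unit, so "inactive" on stdout together with exit status 3 means docker and containerd really are disabled on a crio node. A Go sketch of interpreting that exit code (assumed helper, not the test's implementation):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// runtimeInactive reports whether "systemctl is-active <unit>" inside the node
// returns anything other than "active". A non-zero exit with "inactive" on
// stdout is the expected, passing outcome here.
func runtimeInactive(minikube, profile, unit string) (bool, error) {
	out, err := exec.Command(minikube, "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// Command ran; the unit simply isn't active (e.g. exit status 3).
			return state != "active", nil
		}
		return false, err // ssh or exec failure, not a unit state
	}
	return state != "active", nil
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		inactive, err := runtimeInactive("out/minikube-linux-amd64", "functional-986152", unit)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("%s inactive: %v", unit, inactive)
	}
}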

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-986152 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-986152 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-p8c2f" [077b4f38-b65e-41ad-badf-0add7b11f096] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-p8c2f" [077b4f38-b65e-41ad-badf-0add7b11f096] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003584851s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.18s)
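DeployApp is the usual create/expose/wait sequence: create the deployment, expose it as a NodePort service on 8080, then poll until the pod behind the app=hello-node label is Running. A Go sketch of those three steps with the context from this run (illustrative; the timeout value is an assumption):

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Printf("%s %s: %v\n%s", name, strings.Join(args, " "), err, out)
	}
}

func main() {
	ctx := "functional-986152"

	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image", "kicbase/echo-server")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")

	// Poll until the pod behind the deployment reports Running.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "app=hello-node", "-o", "jsonpath={.items[0].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			log.Println("hello-node is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for hello-node pod")
}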

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo709919815/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765852535088996387" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo709919815/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765852535088996387" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo709919815/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765852535088996387" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo709919815/001/test-1765852535088996387
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (317.07025ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:35:35.406462    8586 retry.go:31] will retry after 316.626302ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 02:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 02:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 02:35 test-1765852535088996387
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh cat /mount-9p/test-1765852535088996387
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-986152 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [205bf9e3-24b1-4a0a-8ad0-85591649102e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [205bf9e3-24b1-4a0a-8ad0-85591649102e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [205bf9e3-24b1-4a0a-8ad0-85591649102e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004164655s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-986152 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo709919815/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.91s)
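The mount tests start "minikube mount host-dir:/mount-9p" as a background daemon and then confirm the 9p filesystem is visible in the guest with findmnt; the first findmnt attempt in the log fails because the mount is not up yet, hence the retry. A Go sketch of that pattern (host directory name is hypothetical; the rest mirrors the commands shown above):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64"
		profile  = "functional-986152"
		hostDir  = "/tmp/mount-demo" // hypothetical host directory
		guestDir = "/mount-9p"
	)

	// Start the 9p mount as a background process, like the test's daemon step.
	mount := exec.Command(minikube, "mount", "-p", profile, hostDir+":"+guestDir)
	if err := mount.Start(); err != nil {
		log.Fatalf("start mount: %v", err)
	}
	defer func() {
		mount.Process.Kill()
		mount.Wait()
	}()

	// The mount takes a moment to appear, so retry findmnt a few times.
	for attempt := 1; attempt <= 10; attempt++ {
		err := exec.Command(minikube, "-p", profile, "ssh",
			"findmnt -T "+guestDir+" | grep 9p").Run()
		if err == nil {
			log.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("mount never became visible")
}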

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "376.777154ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.328479ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "381.517348ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.251456ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.45s)
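The Took "..." lines above are simple wall-clock measurements around each profile command. A Go sketch of the same measurement (illustrative only):

package main

import (
	"log"
	"os/exec"
	"time"
)

// timeCommand runs a command and reports how long it took.
func timeCommand(name string, args ...string) (time.Duration, error) {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	return time.Since(start), err
}

func main() {
	for _, args := range [][]string{
		{"profile", "list", "-o", "json"},
		{"profile", "list", "-o", "json", "--light"},
	} {
		d, err := timeCommand("out/minikube-linux-amd64", args...)
		if err != nil {
			log.Fatalf("%v: %v", args, err)
		}
		log.Printf("%v took %s", args, d)
	}
}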

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-986152 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-986152 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-986152 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-986152 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 61673: os: process already finished
helpers_test.go:520: unable to terminate pid 61483: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-986152 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-986152 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [7a6f7485-5caa-4b7a-94a6-b1f39b7e1c72] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [7a6f7485-5caa-4b7a-94a6-b1f39b7e1c72] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004213625s
I1216 02:35:49.604704    8586 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3249831859/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.205635ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:35:42.326272    8586 retry.go:31] will retry after 583.023643ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3249831859/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh "sudo umount -f /mount-9p": exit status 1 (284.693829ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-986152 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3249831859/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.97s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 service list -o json
2025/12/16 02:35:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1504: Took "918.10653ms" to run "out/minikube-linux-amd64 -p functional-986152 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3266352232/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3266352232/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3266352232/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T" /mount1: exit status 1 (335.608594ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:35:44.398054    8586 retry.go:31] will retry after 722.094469ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-986152 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3266352232/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3266352232/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-986152 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3266352232/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.99s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31044
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31044
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-986152 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.2.73 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
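With "minikube tunnel" running, the LoadBalancer service gets an ingress IP (10.99.2.73 above), and AccessDirect simply fetches it over HTTP. A Go sketch of looking up that IP via the same jsonpath used in the IngressIP step and issuing the GET (assumes the tunnel is already running, as it is in this test):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ip, err := exec.Command("kubectl", "--context", "functional-986152",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatalf("lookup ingress IP: %v", err)
	}
	url := "http://" + strings.TrimSpace(string(ip))

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	fmt.Printf("tunnel at %s is working, status %d\n", url, resp.StatusCode)
}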

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-986152 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.58s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-986152 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-986152
localhost/kicbase/echo-server:functional-986152
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-986152 image ls --format short --alsologtostderr:
I1216 02:36:00.126728   67420 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:00.126970   67420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.126983   67420 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:00.126988   67420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.127185   67420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:36:00.127863   67420 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.127996   67420 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.128466   67420 cli_runner.go:164] Run: docker container inspect functional-986152 --format={{.State.Status}}
I1216 02:36:00.146917   67420 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:00.146963   67420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-986152
I1216 02:36:00.165927   67420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-986152/id_rsa Username:docker}
I1216 02:36:00.265790   67420 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-986152 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-986152  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-986152  │ abc5b69fba3e9 │ 3.33kB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-986152 image ls --format table --alsologtostderr:
I1216 02:36:00.619230   67691 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:00.619327   67691 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.619338   67691 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:00.619343   67691 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.619597   67691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:36:00.620144   67691 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.620229   67691 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.620645   67691 cli_runner.go:164] Run: docker container inspect functional-986152 --format={{.State.Status}}
I1216 02:36:00.641794   67691 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:00.641876   67691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-986152
I1216 02:36:00.662797   67691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-986152/id_rsa Username:docker}
I1216 02:36:00.761409   67691 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)
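The same image listing is also emitted as JSON (see the ImageListJson run below); each entry carries id, repoDigests, repoTags, and size fields. A Go sketch that decodes that output into a struct, using the binary path and profile from this run (a sketch under those assumptions, not the test's parser):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the "image ls --format json" output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-986152",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-15.15s  %s  (%d tags)\n", img.ID, img.Size, len(img.RepoTags))
	}
}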

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-986152 image ls --format json --alsologtostderr:
[{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","
localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-986152"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f
86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/cor
edns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"07655ddf2eeb
e5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"abc5b69fba3e92f8377549c863b498a794a51e87ffc3edb68db73ba5551d6cdf","repoDigests":["localhost/minikube-local-cache-test@sha256:75e1619d3e3475e25326d03ddf2319054b1bfac32e606a4da1997f0999e31021"],"repoTags":["localhost/minikube-local-cache-test:functional-986152"],"size":"3330"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","r
epoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854a
d5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-986152 image ls --format json --alsologtostderr:
I1216 02:36:00.369804   67527 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:00.369898   67527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.369905   67527 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:00.369909   67527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.370100   67527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:36:00.370598   67527 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.370680   67527 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.371138   67527 cli_runner.go:164] Run: docker container inspect functional-986152 --format={{.State.Status}}
I1216 02:36:00.389600   67527 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:00.389652   67527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-986152
I1216 02:36:00.408482   67527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-986152/id_rsa Username:docker}
I1216 02:36:00.511960   67527 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)
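
The "image ls --format json" stdout above is a single JSON array of image records with id, repoDigests, repoTags and size fields. A minimal Go sketch for decoding it, reusing the binary path and profile name from this run; this is illustrative only and is not the functional_test.go implementation:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord mirrors the fields visible in the JSON output above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Run the same listing command as the test and capture its stdout.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-986152",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Print id, tag(s) and size for each record.
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}

The ImageListYaml result below is the same listing rendered through --format yaml, so the same record shape applies there.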

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-986152 image ls --format yaml --alsologtostderr:
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-986152
size: "4944818"
- id: abc5b69fba3e92f8377549c863b498a794a51e87ffc3edb68db73ba5551d6cdf
repoDigests:
- localhost/minikube-local-cache-test@sha256:75e1619d3e3475e25326d03ddf2319054b1bfac32e606a4da1997f0999e31021
repoTags:
- localhost/minikube-local-cache-test:functional-986152
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-986152 image ls --format yaml --alsologtostderr:
I1216 02:36:00.127083   67421 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:00.127196   67421 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.127202   67421 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:00.127208   67421 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.127454   67421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:36:00.128114   67421 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.128236   67421 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.128675   67421 cli_runner.go:164] Run: docker container inspect functional-986152 --format={{.State.Status}}
I1216 02:36:00.147879   67421 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:00.147928   67421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-986152
I1216 02:36:00.166909   67421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-986152/id_rsa Username:docker}
I1216 02:36:00.265783   67421 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-986152 ssh pgrep buildkitd: exit status 1 (278.377084ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image build -t localhost/my-image:functional-986152 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-986152 image build -t localhost/my-image:functional-986152 testdata/build --alsologtostderr: (2.474956472s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-986152 image build -t localhost/my-image:functional-986152 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c32b371d661
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-986152
--> 6be7315b615
Successfully tagged localhost/my-image:functional-986152
6be7315b615eb5fbd4d4c838c7351a4fbb07e76b1a31e864c6d32e5169cb9d12
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-986152 image build -t localhost/my-image:functional-986152 testdata/build --alsologtostderr:
I1216 02:36:00.652355   67702 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:00.652601   67702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.652609   67702 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:00.652613   67702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:00.652813   67702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
I1216 02:36:00.653347   67702 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.654090   67702 config.go:182] Loaded profile config "functional-986152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:36:00.654666   67702 cli_runner.go:164] Run: docker container inspect functional-986152 --format={{.State.Status}}
I1216 02:36:00.673294   67702 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:00.673338   67702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-986152
I1216 02:36:00.692750   67702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/functional-986152/id_rsa Username:docker}
I1216 02:36:00.791025   67702 build_images.go:162] Building image from path: /tmp/build.713376579.tar
I1216 02:36:00.791091   67702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 02:36:00.799046   67702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.713376579.tar
I1216 02:36:00.802617   67702 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.713376579.tar: stat -c "%s %y" /var/lib/minikube/build/build.713376579.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.713376579.tar': No such file or directory
I1216 02:36:00.802644   67702 ssh_runner.go:362] scp /tmp/build.713376579.tar --> /var/lib/minikube/build/build.713376579.tar (3072 bytes)
I1216 02:36:00.820489   67702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.713376579
I1216 02:36:00.827807   67702 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.713376579 -xf /var/lib/minikube/build/build.713376579.tar
I1216 02:36:00.835525   67702 crio.go:315] Building image: /var/lib/minikube/build/build.713376579
I1216 02:36:00.835573   67702 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-986152 /var/lib/minikube/build/build.713376579 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 02:36:03.034758   67702 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-986152 /var/lib/minikube/build/build.713376579 --cgroup-manager=cgroupfs: (2.199166464s)
I1216 02:36:03.034834   67702 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.713376579
I1216 02:36:03.042932   67702 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.713376579.tar
I1216 02:36:03.050438   67702 build_images.go:218] Built localhost/my-image:functional-986152 from /tmp/build.713376579.tar
I1216 02:36:03.050463   67702 build_images.go:134] succeeded building to: functional-986152
I1216 02:36:03.050468   67702 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.99s)
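
Because the runtime here is CRI-O and buildkitd is not running (the pgrep check above exits 1), the build is delegated to "sudo podman build" inside the node, and the stdout shows a three-step build: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt. A minimal Go sketch of the same build-and-list flow driven from the host, assuming the binary path and profile from this run; it is not the test suite's own code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-amd64" // binary path from this run
	profile := "functional-986152"

	// Build localhost/my-image:<profile> from the testdata/build context,
	// mirroring the command at functional_test.go:330 above.
	build := exec.Command(minikube, "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build", "--alsologtostderr")
	out, err := build.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}

	// Confirm the new image is listed, as the test does with "image ls".
	listing, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(listing))
}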

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-986152
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (5.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image load --daemon kicbase/echo-server:functional-986152 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-986152 image load --daemon kicbase/echo-server:functional-986152 --alsologtostderr: (5.300173265s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (5.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image load --daemon kicbase/echo-server:functional-986152 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-986152
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image load --daemon kicbase/echo-server:functional-986152 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image save kicbase/echo-server:functional-986152 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image rm kicbase/echo-server:functional-986152 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.64s)
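
ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a save/remove/reload round trip through a tarball. A minimal Go sketch of that sequence, reusing the tarball path and profile from the log above (illustrative only, not the test code):

package main

import "os/exec"

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "functional-986152"
	tarball := "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" // path from the log

	// Save the tagged image to a tarball (functional_test.go:395 above).
	if err := exec.Command(minikube, "-p", profile, "image", "save",
		"kicbase/echo-server:"+profile, tarball).Run(); err != nil {
		panic(err)
	}
	// Remove it from the cluster (functional_test.go:407 above) ...
	if err := exec.Command(minikube, "-p", profile, "image", "rm",
		"kicbase/echo-server:"+profile).Run(); err != nil {
		panic(err)
	}
	// ... and load it back from the tarball (functional_test.go:424 above).
	if err := exec.Command(minikube, "-p", profile, "image", "load", tarball).Run(); err != nil {
		panic(err)
	}
}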

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-986152
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-986152 image save --daemon kicbase/echo-server:functional-986152 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-986152
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-986152
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-986152
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-986152
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (113.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 02:37:25.720555    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:37:53.429401    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m52.328487919s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (113.06s)
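
StartCluster brings up a multi-control-plane (HA) cluster with the flags shown at ha_test.go:101 and then checks every node with "status". A minimal Go sketch of that start-then-verify flow, with the alsologtostderr/verbosity flags omitted; the binary path and profile name are taken from this run, and the sketch is not the test's actual code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "ha-905733"

	// Start the HA cluster with the core flags from ha_test.go:101.
	start := exec.Command(minikube, "-p", profile, "start", "--ha",
		"--memory", "3072", "--wait", "true", "--driver=docker", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start failed: %v\n%s", err, out))
	}

	// Verify every node reports Running/Configured, as ha_test.go:107 does.
	status, err := exec.Command(minikube, "-p", profile, "status").CombinedOutput()
	fmt.Println(string(status))
	if err != nil {
		panic(err)
	}
}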

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 kubectl -- rollout status deployment/busybox: (2.852966608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-cp97j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-dbt9z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-pz79w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-cp97j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-dbt9z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-pz79w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-cp97j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-dbt9z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-pz79w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.83s)
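
DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox rollout, and then resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from every pod. A minimal Go sketch of the per-pod DNS check, assuming the same binary and profile as above (not the test's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "ha-905733"

	// List the busybox pod names, as ha_test.go:163 does.
	out, err := exec.Command(minikube, "-p", profile, "kubectl", "--",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}

	// Resolve the three names checked at ha_test.go:171-189 from inside every pod.
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			lookup := exec.Command(minikube, "-p", profile, "kubectl", "--",
				"exec", pod, "--", "nslookup", host)
			if err := lookup.Run(); err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n", pod, host, err)
			}
		}
	}
}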

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-cp97j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-cp97j -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-dbt9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-dbt9z -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-pz79w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 kubectl -- exec busybox-7b57f96db7-pz79w -- sh -c "ping -c 1 192.168.49.1"
E1216 02:38:12.671626    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:12.677982    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:12.689372    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:12.710849    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:12.752204    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)
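
The pipeline at ha_test.go:207, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of the nslookup output and its third space-separated field, which is the host IP that each pod then pings (192.168.49.1 above). A small Go sketch of the same extraction; the sample output string below is hypothetical, while the real test captures it from the pod:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics "awk 'NR==5' | cut -d' ' -f3": take the fifth line
// of the output and return its third space-separated field.
func hostIPFromNslookup(output string) string {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // cut -d' ' semantics: split on single spaces
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox nslookup output for host.minikube.internal.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.49.1
}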

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 node add --alsologtostderr -v 5
E1216 02:38:12.834207    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:12.996294    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:13.318485    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:13.959999    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:15.241655    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:17.803655    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:22.925567    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:33.166947    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:53.648224    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 node add --alsologtostderr -v 5: (55.752395422s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-905733 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp testdata/cp-test.txt ha-905733:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile857010286/001/cp-test_ha-905733.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733:/home/docker/cp-test.txt ha-905733-m02:/home/docker/cp-test_ha-905733_ha-905733-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test_ha-905733_ha-905733-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733:/home/docker/cp-test.txt ha-905733-m03:/home/docker/cp-test_ha-905733_ha-905733-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test_ha-905733_ha-905733-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733:/home/docker/cp-test.txt ha-905733-m04:/home/docker/cp-test_ha-905733_ha-905733-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test_ha-905733_ha-905733-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp testdata/cp-test.txt ha-905733-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile857010286/001/cp-test_ha-905733-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m02:/home/docker/cp-test.txt ha-905733:/home/docker/cp-test_ha-905733-m02_ha-905733.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test_ha-905733-m02_ha-905733.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m02:/home/docker/cp-test.txt ha-905733-m03:/home/docker/cp-test_ha-905733-m02_ha-905733-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test_ha-905733-m02_ha-905733-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m02:/home/docker/cp-test.txt ha-905733-m04:/home/docker/cp-test_ha-905733-m02_ha-905733-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test_ha-905733-m02_ha-905733-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp testdata/cp-test.txt ha-905733-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile857010286/001/cp-test_ha-905733-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m03:/home/docker/cp-test.txt ha-905733:/home/docker/cp-test_ha-905733-m03_ha-905733.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test_ha-905733-m03_ha-905733.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m03:/home/docker/cp-test.txt ha-905733-m02:/home/docker/cp-test_ha-905733-m03_ha-905733-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test_ha-905733-m03_ha-905733-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m03:/home/docker/cp-test.txt ha-905733-m04:/home/docker/cp-test_ha-905733-m03_ha-905733-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test_ha-905733-m03_ha-905733-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp testdata/cp-test.txt ha-905733-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile857010286/001/cp-test_ha-905733-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m04:/home/docker/cp-test.txt ha-905733:/home/docker/cp-test_ha-905733-m04_ha-905733.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733 "sudo cat /home/docker/cp-test_ha-905733-m04_ha-905733.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m04:/home/docker/cp-test.txt ha-905733-m02:/home/docker/cp-test_ha-905733-m04_ha-905733-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m02 "sudo cat /home/docker/cp-test_ha-905733-m04_ha-905733-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 cp ha-905733-m04:/home/docker/cp-test.txt ha-905733-m03:/home/docker/cp-test_ha-905733-m04_ha-905733-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 ssh -n ha-905733-m03 "sudo cat /home/docker/cp-test_ha-905733-m04_ha-905733-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.39s)
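
CopyFile pushes testdata/cp-test.txt to every node with "minikube cp" and reads it back with "minikube ssh ... sudo cat" to confirm the copy, covering every source/destination pair. A minimal Go sketch of one such round trip per node, using the node names from this run (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "ha-905733"
	nodes := []string{profile, profile + "-m02", profile + "-m03", profile + "-m04"}

	for _, node := range nodes {
		// Copy the local test file onto the node (helpers_test.go:574 above).
		cp := exec.Command(minikube, "-p", profile, "cp", "testdata/cp-test.txt",
			node+":/home/docker/cp-test.txt")
		if err := cp.Run(); err != nil {
			panic(err)
		}
		// Read it back over SSH to confirm the copy (helpers_test.go:552 above).
		out, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s\n", node, strings.TrimSpace(string(out)))
	}
}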

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (13.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 node stop m02 --alsologtostderr -v 5
E1216 02:39:34.610561    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 node stop m02 --alsologtostderr -v 5: (12.585889097s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5: exit status 7 (695.786302ms)

                                                
                                                
-- stdout --
	ha-905733
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-905733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-905733-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-905733-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:39:40.370492   88099 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:39:40.370582   88099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:39:40.370586   88099 out.go:374] Setting ErrFile to fd 2...
	I1216 02:39:40.370590   88099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:39:40.370772   88099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:39:40.370987   88099 out.go:368] Setting JSON to false
	I1216 02:39:40.371013   88099 mustload.go:66] Loading cluster: ha-905733
	I1216 02:39:40.371137   88099 notify.go:221] Checking for updates...
	I1216 02:39:40.371375   88099 config.go:182] Loaded profile config "ha-905733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:39:40.371386   88099 status.go:174] checking status of ha-905733 ...
	I1216 02:39:40.371803   88099 cli_runner.go:164] Run: docker container inspect ha-905733 --format={{.State.Status}}
	I1216 02:39:40.391718   88099 status.go:371] ha-905733 host status = "Running" (err=<nil>)
	I1216 02:39:40.391741   88099 host.go:66] Checking if "ha-905733" exists ...
	I1216 02:39:40.392086   88099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-905733
	I1216 02:39:40.411763   88099 host.go:66] Checking if "ha-905733" exists ...
	I1216 02:39:40.412050   88099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:39:40.412089   88099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-905733
	I1216 02:39:40.431905   88099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/ha-905733/id_rsa Username:docker}
	I1216 02:39:40.527142   88099 ssh_runner.go:195] Run: systemctl --version
	I1216 02:39:40.533482   88099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:39:40.545723   88099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:39:40.605568   88099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-16 02:39:40.595622331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:39:40.606166   88099 kubeconfig.go:125] found "ha-905733" server: "https://192.168.49.254:8443"
	I1216 02:39:40.606202   88099 api_server.go:166] Checking apiserver status ...
	I1216 02:39:40.606254   88099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:39:40.617745   88099 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1262/cgroup
	W1216 02:39:40.625896   88099 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1262/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:39:40.625972   88099 ssh_runner.go:195] Run: ls
	I1216 02:39:40.629933   88099 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 02:39:40.634153   88099 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 02:39:40.634169   88099 status.go:463] ha-905733 apiserver status = Running (err=<nil>)
	I1216 02:39:40.634178   88099 status.go:176] ha-905733 status: &{Name:ha-905733 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:39:40.634208   88099 status.go:174] checking status of ha-905733-m02 ...
	I1216 02:39:40.634413   88099 cli_runner.go:164] Run: docker container inspect ha-905733-m02 --format={{.State.Status}}
	I1216 02:39:40.651947   88099 status.go:371] ha-905733-m02 host status = "Stopped" (err=<nil>)
	I1216 02:39:40.651966   88099 status.go:384] host is not running, skipping remaining checks
	I1216 02:39:40.651972   88099 status.go:176] ha-905733-m02 status: &{Name:ha-905733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:39:40.651993   88099 status.go:174] checking status of ha-905733-m03 ...
	I1216 02:39:40.652251   88099 cli_runner.go:164] Run: docker container inspect ha-905733-m03 --format={{.State.Status}}
	I1216 02:39:40.669575   88099 status.go:371] ha-905733-m03 host status = "Running" (err=<nil>)
	I1216 02:39:40.669603   88099 host.go:66] Checking if "ha-905733-m03" exists ...
	I1216 02:39:40.669988   88099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-905733-m03
	I1216 02:39:40.687142   88099 host.go:66] Checking if "ha-905733-m03" exists ...
	I1216 02:39:40.687386   88099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:39:40.687421   88099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-905733-m03
	I1216 02:39:40.705109   88099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/ha-905733-m03/id_rsa Username:docker}
	I1216 02:39:40.799792   88099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:39:40.812705   88099 kubeconfig.go:125] found "ha-905733" server: "https://192.168.49.254:8443"
	I1216 02:39:40.812734   88099 api_server.go:166] Checking apiserver status ...
	I1216 02:39:40.812774   88099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:39:40.824548   88099 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup
	W1216 02:39:40.833655   88099 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:39:40.833699   88099 ssh_runner.go:195] Run: ls
	I1216 02:39:40.837442   88099 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 02:39:40.841751   88099 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 02:39:40.841769   88099 status.go:463] ha-905733-m03 apiserver status = Running (err=<nil>)
	I1216 02:39:40.841778   88099 status.go:176] ha-905733-m03 status: &{Name:ha-905733-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:39:40.841790   88099 status.go:174] checking status of ha-905733-m04 ...
	I1216 02:39:40.842109   88099 cli_runner.go:164] Run: docker container inspect ha-905733-m04 --format={{.State.Status}}
	I1216 02:39:40.861421   88099 status.go:371] ha-905733-m04 host status = "Running" (err=<nil>)
	I1216 02:39:40.861443   88099 host.go:66] Checking if "ha-905733-m04" exists ...
	I1216 02:39:40.861713   88099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-905733-m04
	I1216 02:39:40.878615   88099 host.go:66] Checking if "ha-905733-m04" exists ...
	I1216 02:39:40.878916   88099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:39:40.878961   88099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-905733-m04
	I1216 02:39:40.896092   88099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/ha-905733-m04/id_rsa Username:docker}
	I1216 02:39:40.991720   88099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:39:41.003998   88099 status.go:176] ha-905733-m04 status: &{Name:ha-905733-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.28s)
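Note on the status probe logged above: for each running control-plane node the status command locates the kube-apiserver process, checks its freezer cgroup, and then probes the shared server endpoint taken from the kubeconfig. The same health endpoint can be probed by hand (curl is not used by the test itself and appears here only as an illustrative stand-in for the HTTP client in status.go):

	curl -k https://192.168.49.254:8443/healthz
	# a healthy cluster answers with the body "ok", matching the log above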

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.15s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 node start m02 --alsologtostderr -v 5: (13.196380597s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.7s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 stop --alsologtostderr -v 5
E1216 02:40:34.789718    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:34.796192    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:34.807630    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:34.829096    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:34.870598    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:34.952077    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:35.113635    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:35.435331    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:36.077552    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:37.359140    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:39.921189    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:45.042981    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 stop --alsologtostderr -v 5: (51.931904433s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 start --wait true --alsologtostderr -v 5
E1216 02:40:55.285249    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:56.532010    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:41:15.766783    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 start --wait true --alsologtostderr -v 5: (1m7.640325749s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.70s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.62s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 node delete m03 --alsologtostderr -v 5
E1216 02:41:56.728395    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 node delete m03 --alsologtostderr -v 5: (9.75145535s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.62s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (47.68s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 stop --alsologtostderr -v 5
E1216 02:42:25.720122    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 stop --alsologtostderr -v 5: (47.55910268s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5: exit status 7 (117.692648ms)

                                                
                                                
-- stdout --
	ha-905733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-905733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-905733-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:42:55.403666  102493 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:42:55.403960  102493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:42:55.403970  102493 out.go:374] Setting ErrFile to fd 2...
	I1216 02:42:55.403974  102493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:42:55.404176  102493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:42:55.404328  102493 out.go:368] Setting JSON to false
	I1216 02:42:55.404350  102493 mustload.go:66] Loading cluster: ha-905733
	I1216 02:42:55.404453  102493 notify.go:221] Checking for updates...
	I1216 02:42:55.404731  102493 config.go:182] Loaded profile config "ha-905733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:42:55.404746  102493 status.go:174] checking status of ha-905733 ...
	I1216 02:42:55.405190  102493 cli_runner.go:164] Run: docker container inspect ha-905733 --format={{.State.Status}}
	I1216 02:42:55.425896  102493 status.go:371] ha-905733 host status = "Stopped" (err=<nil>)
	I1216 02:42:55.425932  102493 status.go:384] host is not running, skipping remaining checks
	I1216 02:42:55.425940  102493 status.go:176] ha-905733 status: &{Name:ha-905733 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:42:55.425979  102493 status.go:174] checking status of ha-905733-m02 ...
	I1216 02:42:55.426207  102493 cli_runner.go:164] Run: docker container inspect ha-905733-m02 --format={{.State.Status}}
	I1216 02:42:55.444249  102493 status.go:371] ha-905733-m02 host status = "Stopped" (err=<nil>)
	I1216 02:42:55.444271  102493 status.go:384] host is not running, skipping remaining checks
	I1216 02:42:55.444295  102493 status.go:176] ha-905733-m02 status: &{Name:ha-905733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:42:55.444315  102493 status.go:174] checking status of ha-905733-m04 ...
	I1216 02:42:55.444610  102493 cli_runner.go:164] Run: docker container inspect ha-905733-m04 --format={{.State.Status}}
	I1216 02:42:55.462309  102493 status.go:371] ha-905733-m04 host status = "Stopped" (err=<nil>)
	I1216 02:42:55.462350  102493 status.go:384] host is not running, skipping remaining checks
	I1216 02:42:55.462359  102493 status.go:176] ha-905733-m04 status: &{Name:ha-905733-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.68s)
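The non-zero exit above is treated as expected by the test: with every host stopped, the status command exits with code 7 while still printing the per-node table. A minimal stop-and-verify sequence, using only the commands from this run:

	out/minikube-linux-amd64 -p ha-905733 stop --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5   # exit status 7 while all hosts are Stopped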

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (51.05s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 02:43:12.673241    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:43:18.651731    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:43:40.374197    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (50.199301358s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.05s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (40.64s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-905733 node add --control-plane --alsologtostderr -v 5: (39.754494688s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.64s)
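Growing the HA control plane in this test is a two-command sequence; a condensed sketch with the profile name from this run:

	out/minikube-linux-amd64 -p ha-905733 node add --control-plane --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-905733 status --alsologtostderr -v 5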

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
TestJSONOutput/start/Command (40.91s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-620276 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-620276 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.906858278s)
--- PASS: TestJSONOutput/start/Command (40.91s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-620276 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-620276 --output=json --user=testUser: (6.113905256s)
--- PASS: TestJSONOutput/stop/Command (6.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-688603 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-688603 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.072303ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0b5a8b13-7b89-4092-b4c2-78299a30c803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-688603] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f97b89c-e5c6-417c-8cae-a8cc248f5da4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22158"}}
	{"specversion":"1.0","id":"59a313f9-ec79-40f3-89fb-d2717015af6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b807c6bc-eff2-4ebd-9610-eaf6b83a0e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig"}}
	{"specversion":"1.0","id":"76101ef1-aabc-45ab-8631-9c0bca13d8ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube"}}
	{"specversion":"1.0","id":"7680d462-f9f4-4d4f-85f6-251f71a496ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9c0c8e03-22ca-48ea-99c4-7cc9444e946a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fe6bb7c1-0519-4d8f-820f-44fe143fe8a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-688603" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-688603
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (30.42s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-553295 --network=
E1216 02:45:34.788727    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-553295 --network=: (28.236497774s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-553295" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-553295
E1216 02:46:02.493778    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-553295: (2.161985096s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.42s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.86s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-988731 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-988731 --network=bridge: (19.810472029s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-988731" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-988731
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-988731: (2.025327233s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.86s)

                                                
                                    
TestKicExistingNetwork (25.92s)
=== RUN   TestKicExistingNetwork
I1216 02:46:25.579971    8586 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 02:46:25.596670    8586 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 02:46:25.596735    8586 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 02:46:25.596752    8586 cli_runner.go:164] Run: docker network inspect existing-network
W1216 02:46:25.612873    8586 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 02:46:25.612900    8586 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1216 02:46:25.612915    8586 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1216 02:46:25.613046    8586 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 02:46:25.629759    8586 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1a1332fcbeca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:af:eb:c9:8b:0a} reservation:<nil>}
I1216 02:46:25.630275    8586 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f97320}
I1216 02:46:25.630308    8586 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 02:46:25.630358    8586 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 02:46:25.676680    8586 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-706689 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-706689 --network=existing-network: (23.800905413s)
helpers_test.go:176: Cleaning up "existing-network-706689" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-706689
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-706689: (1.990607834s)
I1216 02:46:51.485394    8586 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.92s)
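This test pre-creates a Docker network and then starts a profile against it with --network. A hand-run equivalent built only from commands visible in the log (minikube's extra -o options and labels on the create call are omitted for brevity; names and subnet are the ones chosen in this run):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	out/minikube-linux-amd64 start -p existing-network-706689 --network=existing-network
	docker network ls --format {{.Name}}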

                                                
                                    
TestKicCustomSubnet (23.89s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-103485 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-103485 --subnet=192.168.60.0/24: (21.71707892s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-103485 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-103485" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-103485
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-103485: (2.154869089s)
--- PASS: TestKicCustomSubnet (23.89s)

                                                
                                    
TestKicStaticIP (26.99s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-363891 --static-ip=192.168.200.200
E1216 02:47:25.720092    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-363891 --static-ip=192.168.200.200: (24.651021339s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-363891 ip
helpers_test.go:176: Cleaning up "static-ip-363891" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-363891
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-363891: (2.185234965s)
--- PASS: TestKicStaticIP (26.99s)
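The two preceding KIC tests exercise the --subnet and --static-ip start flags and verify them with docker network inspect and minikube ip; condensed from the commands above (values are the ones used in this run):

	out/minikube-linux-amd64 start -p custom-subnet-103485 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-103485 --format "{{(index .IPAM.Config 0).Subnet}}"
	out/minikube-linux-amd64 start -p static-ip-363891 --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p static-ip-363891 ip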

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (50.67s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-186958 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-186958 --driver=docker  --container-runtime=crio: (22.36821122s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-189814 --driver=docker  --container-runtime=crio
E1216 02:48:12.671977    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-189814 --driver=docker  --container-runtime=crio: (22.338591412s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-186958
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-189814
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-189814" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-189814
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-189814: (2.376232399s)
helpers_test.go:176: Cleaning up "first-186958" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-186958
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-186958: (2.359966548s)
--- PASS: TestMinikubeProfile (50.67s)
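The profile switch exercised here reduces to two commands per profile; a sketch with the names from this run:

	out/minikube-linux-amd64 profile first-186958
	out/minikube-linux-amd64 profile list -ojson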

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.81s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-499978 --memory=3072 --mount-string /tmp/TestMountStartserial275646711/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-499978 --memory=3072 --mount-string /tmp/TestMountStartserial275646711/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.80684569s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.81s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-499978 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
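The mount-only start used by the TestMountStart group (no Kubernetes components) and its verification step, condensed from the commands above; the mount string, port and profile name are the values from this run:

	out/minikube-linux-amd64 start -p mount-start-1-499978 --memory=3072 \
	  --mount-string /tmp/TestMountStartserial275646711/001:/minikube-host \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-1-499978 ssh -- ls /minikube-host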

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.83s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-514543 --memory=3072 --mount-string /tmp/TestMountStartserial275646711/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-514543 --memory=3072 --mount-string /tmp/TestMountStartserial275646711/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.83348562s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-514543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-499978 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-499978 --alsologtostderr -v=5: (1.694615438s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-514543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-514543
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-514543: (1.258936095s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.22s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-514543
E1216 02:48:48.792496    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-514543: (6.216170395s)
--- PASS: TestMountStart/serial/RestartStopped (7.22s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-514543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (90.23s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453596 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453596 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m29.75043139s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.23s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.45s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-453596 -- rollout status deployment/busybox: (2.014716963s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-ltzr4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-qcbjc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-ltzr4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-qcbjc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-ltzr4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-qcbjc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.45s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-ltzr4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-ltzr4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-qcbjc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453596 -- exec busybox-7b57f96db7-qcbjc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
TestMultiNode/serial/AddNode (53.15s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-453596 -v=5 --alsologtostderr
E1216 02:50:34.790269    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-453596 -v=5 --alsologtostderr: (52.500029126s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.15s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-453596 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.96s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp testdata/cp-test.txt multinode-453596:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile676078779/001/cp-test_multinode-453596.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596:/home/docker/cp-test.txt multinode-453596-m02:/home/docker/cp-test_multinode-453596_multinode-453596-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m02 "sudo cat /home/docker/cp-test_multinode-453596_multinode-453596-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596:/home/docker/cp-test.txt multinode-453596-m03:/home/docker/cp-test_multinode-453596_multinode-453596-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m03 "sudo cat /home/docker/cp-test_multinode-453596_multinode-453596-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp testdata/cp-test.txt multinode-453596-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile676078779/001/cp-test_multinode-453596-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596-m02:/home/docker/cp-test.txt multinode-453596:/home/docker/cp-test_multinode-453596-m02_multinode-453596.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596 "sudo cat /home/docker/cp-test_multinode-453596-m02_multinode-453596.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596-m02:/home/docker/cp-test.txt multinode-453596-m03:/home/docker/cp-test_multinode-453596-m02_multinode-453596-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m03 "sudo cat /home/docker/cp-test_multinode-453596-m02_multinode-453596-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp testdata/cp-test.txt multinode-453596-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile676078779/001/cp-test_multinode-453596-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596-m03:/home/docker/cp-test.txt multinode-453596:/home/docker/cp-test_multinode-453596-m03_multinode-453596.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596 "sudo cat /home/docker/cp-test_multinode-453596-m03_multinode-453596.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 cp multinode-453596-m03:/home/docker/cp-test.txt multinode-453596-m02:/home/docker/cp-test_multinode-453596-m03_multinode-453596-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 ssh -n multinode-453596-m02 "sudo cat /home/docker/cp-test_multinode-453596-m03_multinode-453596-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.96s)
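
The copy/verify loop above can be reproduced outside the harness: copy a file into a node with `cp`, read it back over `ssh`, and compare. The sketch below is a minimal version of that round trip, assuming the same binary path, profile name, and testdata file as in this run.

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// run invokes the same minikube binary and profile used in the log above.
	func run(args ...string) []byte {
		cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "multinode-453596"}, args...)...)
		out, err := cmd.Output()
		if err != nil {
			log.Fatalf("%v failed: %v", args, err)
		}
		return out
	}

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		// Copy the file into the control-plane node, then read it back over ssh.
		run("cp", "testdata/cp-test.txt", "multinode-453596:/home/docker/cp-test.txt")
		got := run("ssh", "-n", "multinode-453596", "sudo cat /home/docker/cp-test.txt")
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatalf("round-trip mismatch:\n got: %q\nwant: %q", got, want)
		}
		fmt.Println("cp round-trip OK")
	}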

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-453596 node stop m03: (1.273603553s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453596 status: exit status 7 (497.268026ms)

                                                
                                                
-- stdout --
	multinode-453596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-453596-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-453596-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr: exit status 7 (503.600602ms)

                                                
                                                
-- stdout --
	multinode-453596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-453596-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-453596-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:51:35.983087  162375 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:51:35.983345  162375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:51:35.983353  162375 out.go:374] Setting ErrFile to fd 2...
	I1216 02:51:35.983358  162375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:51:35.983541  162375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:51:35.983697  162375 out.go:368] Setting JSON to false
	I1216 02:51:35.983720  162375 mustload.go:66] Loading cluster: multinode-453596
	I1216 02:51:35.983834  162375 notify.go:221] Checking for updates...
	I1216 02:51:35.984116  162375 config.go:182] Loaded profile config "multinode-453596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:51:35.984136  162375 status.go:174] checking status of multinode-453596 ...
	I1216 02:51:35.984535  162375 cli_runner.go:164] Run: docker container inspect multinode-453596 --format={{.State.Status}}
	I1216 02:51:36.006674  162375 status.go:371] multinode-453596 host status = "Running" (err=<nil>)
	I1216 02:51:36.006700  162375 host.go:66] Checking if "multinode-453596" exists ...
	I1216 02:51:36.007003  162375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-453596
	I1216 02:51:36.025599  162375 host.go:66] Checking if "multinode-453596" exists ...
	I1216 02:51:36.025854  162375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:51:36.025908  162375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-453596
	I1216 02:51:36.044343  162375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/multinode-453596/id_rsa Username:docker}
	I1216 02:51:36.140297  162375 ssh_runner.go:195] Run: systemctl --version
	I1216 02:51:36.146635  162375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:51:36.159078  162375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:51:36.216175  162375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-16 02:51:36.206401505 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 02:51:36.216753  162375 kubeconfig.go:125] found "multinode-453596" server: "https://192.168.67.2:8443"
	I1216 02:51:36.216784  162375 api_server.go:166] Checking apiserver status ...
	I1216 02:51:36.216842  162375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:51:36.228304  162375 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1216 02:51:36.236911  162375 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:51:36.236967  162375 ssh_runner.go:195] Run: ls
	I1216 02:51:36.240616  162375 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 02:51:36.244802  162375 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 02:51:36.244847  162375 status.go:463] multinode-453596 apiserver status = Running (err=<nil>)
	I1216 02:51:36.244868  162375 status.go:176] multinode-453596 status: &{Name:multinode-453596 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:51:36.244888  162375 status.go:174] checking status of multinode-453596-m02 ...
	I1216 02:51:36.245112  162375 cli_runner.go:164] Run: docker container inspect multinode-453596-m02 --format={{.State.Status}}
	I1216 02:51:36.262635  162375 status.go:371] multinode-453596-m02 host status = "Running" (err=<nil>)
	I1216 02:51:36.262659  162375 host.go:66] Checking if "multinode-453596-m02" exists ...
	I1216 02:51:36.262943  162375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-453596-m02
	I1216 02:51:36.280562  162375 host.go:66] Checking if "multinode-453596-m02" exists ...
	I1216 02:51:36.280868  162375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:51:36.280922  162375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-453596-m02
	I1216 02:51:36.299982  162375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22158-5058/.minikube/machines/multinode-453596-m02/id_rsa Username:docker}
	I1216 02:51:36.394184  162375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:51:36.406267  162375 status.go:176] multinode-453596-m02 status: &{Name:multinode-453596-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:51:36.406319  162375 status.go:174] checking status of multinode-453596-m03 ...
	I1216 02:51:36.406568  162375 cli_runner.go:164] Run: docker container inspect multinode-453596-m03 --format={{.State.Status}}
	I1216 02:51:36.425628  162375 status.go:371] multinode-453596-m03 host status = "Stopped" (err=<nil>)
	I1216 02:51:36.425647  162375 status.go:384] host is not running, skipping remaining checks
	I1216 02:51:36.425652  162375 status.go:176] multinode-453596-m03 status: &{Name:multinode-453596-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
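
Note that `minikube status` deliberately exits non-zero (7 in this run) while still printing the per-node report when any node is stopped, so tooling that wraps it has to read stdout even on failure rather than treating a non-zero exit as a hard error. A minimal sketch of that handling, assuming the same binary and profile names as above:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-453596", "status")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero status (e.g. 7 above) still carries the full report on stdout.
			fmt.Printf("status exited %d; report follows:\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			log.Fatal(err) // binary missing, profile unknown, etc.
		}
		fmt.Printf("all nodes running:\n%s", out)
	}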

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-453596 node start m03 -v=5 --alsologtostderr: (6.508525668s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.20s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (78.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453596
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-453596
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-453596: (31.344269956s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453596 --wait=true -v=5 --alsologtostderr
E1216 02:52:25.720077    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453596 --wait=true -v=5 --alsologtostderr: (47.294926346s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453596
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.76s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-453596 node delete m03: (4.625825171s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (30.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 stop
E1216 02:53:12.671561    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-453596 stop: (30.173302155s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453596 status: exit status 7 (97.00351ms)

                                                
                                                
-- stdout --
	multinode-453596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-453596-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr: exit status 7 (100.560999ms)

                                                
                                                
-- stdout --
	multinode-453596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-453596-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:53:37.937539  172228 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:53:37.937649  172228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:53:37.937657  172228 out.go:374] Setting ErrFile to fd 2...
	I1216 02:53:37.937664  172228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:53:37.937905  172228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:53:37.938098  172228 out.go:368] Setting JSON to false
	I1216 02:53:37.938129  172228 mustload.go:66] Loading cluster: multinode-453596
	I1216 02:53:37.938201  172228 notify.go:221] Checking for updates...
	I1216 02:53:37.938539  172228 config.go:182] Loaded profile config "multinode-453596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:53:37.938555  172228 status.go:174] checking status of multinode-453596 ...
	I1216 02:53:37.939066  172228 cli_runner.go:164] Run: docker container inspect multinode-453596 --format={{.State.Status}}
	I1216 02:53:37.960140  172228 status.go:371] multinode-453596 host status = "Stopped" (err=<nil>)
	I1216 02:53:37.960169  172228 status.go:384] host is not running, skipping remaining checks
	I1216 02:53:37.960181  172228 status.go:176] multinode-453596 status: &{Name:multinode-453596 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:53:37.960232  172228 status.go:174] checking status of multinode-453596-m02 ...
	I1216 02:53:37.960472  172228 cli_runner.go:164] Run: docker container inspect multinode-453596-m02 --format={{.State.Status}}
	I1216 02:53:37.978979  172228 status.go:371] multinode-453596-m02 host status = "Stopped" (err=<nil>)
	I1216 02:53:37.978998  172228 status.go:384] host is not running, skipping remaining checks
	I1216 02:53:37.979004  172228 status.go:176] multinode-453596-m02 status: &{Name:multinode-453596-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.37s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (50.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453596 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453596 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.380353379s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453596 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.97s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453596
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453596-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-453596-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.690124ms)

                                                
                                                
-- stdout --
	* [multinode-453596-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-453596-m02' is duplicated with machine name 'multinode-453596-m02' in profile 'multinode-453596'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453596-m03 --driver=docker  --container-runtime=crio
E1216 02:54:35.735760    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453596-m03 --driver=docker  --container-runtime=crio: (23.03065354s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-453596
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-453596: exit status 80 (286.699884ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-453596 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-453596-m03 already exists in multinode-453596-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-453596-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-453596-m03: (2.382824503s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.84s)

                                                
                                    
x
+
TestPreload (100.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-840781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1216 02:55:34.789673    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-840781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (45.693839402s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-840781 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-840781 image pull gcr.io/k8s-minikube/busybox: (1.388779292s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-840781
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-840781: (6.229513114s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-840781 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-840781 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (44.85271436s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-840781 image list
helpers_test.go:176: Cleaning up "test-preload-840781" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-840781
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-840781: (2.434711515s)
--- PASS: TestPreload (100.83s)

                                                
                                    
x
+
TestScheduledStopUnix (98.1s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-708409 --memory=3072 --driver=docker  --container-runtime=crio
E1216 02:56:57.855892    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-708409 --memory=3072 --driver=docker  --container-runtime=crio: (21.917240021s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-708409 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 02:57:01.816367  189272 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:57:01.816724  189272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:57:01.816735  189272 out.go:374] Setting ErrFile to fd 2...
	I1216 02:57:01.816739  189272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:57:01.816973  189272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:57:01.817257  189272 out.go:368] Setting JSON to false
	I1216 02:57:01.817349  189272 mustload.go:66] Loading cluster: scheduled-stop-708409
	I1216 02:57:01.817642  189272 config.go:182] Loaded profile config "scheduled-stop-708409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:57:01.817706  189272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/config.json ...
	I1216 02:57:01.817905  189272 mustload.go:66] Loading cluster: scheduled-stop-708409
	I1216 02:57:01.818019  189272 config.go:182] Loaded profile config "scheduled-stop-708409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-708409 -n scheduled-stop-708409
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 02:57:02.200522  189420 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:57:02.200750  189420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:57:02.200758  189420 out.go:374] Setting ErrFile to fd 2...
	I1216 02:57:02.200762  189420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:57:02.200956  189420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:57:02.201175  189420 out.go:368] Setting JSON to false
	I1216 02:57:02.201357  189420 daemonize_unix.go:73] killing process 189307 as it is an old scheduled stop
	I1216 02:57:02.201457  189420 mustload.go:66] Loading cluster: scheduled-stop-708409
	I1216 02:57:02.201893  189420 config.go:182] Loaded profile config "scheduled-stop-708409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:57:02.201977  189420 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/config.json ...
	I1216 02:57:02.202193  189420 mustload.go:66] Loading cluster: scheduled-stop-708409
	I1216 02:57:02.202317  189420 config.go:182] Loaded profile config "scheduled-stop-708409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1216 02:57:02.206484    8586 retry.go:31] will retry after 132.976µs: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.207653    8586 retry.go:31] will retry after 166.19µs: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.208788    8586 retry.go:31] will retry after 252.483µs: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.209940    8586 retry.go:31] will retry after 356.839µs: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.211066    8586 retry.go:31] will retry after 506.298µs: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.212179    8586 retry.go:31] will retry after 792.421µs: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.213300    8586 retry.go:31] will retry after 1.216964ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.215529    8586 retry.go:31] will retry after 1.141576ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.217737    8586 retry.go:31] will retry after 2.443542ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.220952    8586 retry.go:31] will retry after 4.4025ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.226153    8586 retry.go:31] will retry after 3.615143ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.230367    8586 retry.go:31] will retry after 10.634846ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.241569    8586 retry.go:31] will retry after 18.70471ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.260809    8586 retry.go:31] will retry after 23.198332ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
I1216 02:57:02.285099    8586 retry.go:31] will retry after 43.585303ms: open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-708409 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1216 02:57:25.720041    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-708409 -n scheduled-stop-708409
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-708409
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-708409 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 02:57:28.080683  190076 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:57:28.081142  190076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:57:28.081152  190076 out.go:374] Setting ErrFile to fd 2...
	I1216 02:57:28.081156  190076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:57:28.081376  190076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 02:57:28.081599  190076 out.go:368] Setting JSON to false
	I1216 02:57:28.081670  190076 mustload.go:66] Loading cluster: scheduled-stop-708409
	I1216 02:57:28.081982  190076 config.go:182] Loaded profile config "scheduled-stop-708409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:57:28.082056  190076 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/scheduled-stop-708409/config.json ...
	I1216 02:57:28.082231  190076 mustload.go:66] Loading cluster: scheduled-stop-708409
	I1216 02:57:28.082318  190076 config.go:182] Loaded profile config "scheduled-stop-708409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1216 02:58:12.678128    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-708409
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-708409: exit status 7 (81.360878ms)

                                                
                                                
-- stdout --
	scheduled-stop-708409
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-708409 -n scheduled-stop-708409
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-708409 -n scheduled-stop-708409: exit status 7 (76.873363ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-708409" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-708409
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-708409: (4.69445834s)
--- PASS: TestScheduledStopUnix (98.10s)
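
The schedule/cancel cycle exercised above maps onto two CLI calls plus a status poll. The sketch below drives them from Go using the flags shown in this run; the binary path and profile name are taken from the log and are assumptions for any other environment.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	const profile = "scheduled-stop-708409" // profile name from this run

	func mk(args ...string) *exec.Cmd {
		return exec.Command("out/minikube-linux-amd64", append(args, "-p", profile)...)
	}

	func main() {
		// Schedule a stop 15s out, as in the test ...
		if out, err := mk("stop", "--schedule", "15s").CombinedOutput(); err != nil {
			log.Fatalf("schedule failed: %v\n%s", err, out)
		}
		// ... then cancel it before it fires.
		if out, err := mk("stop", "--cancel-scheduled").CombinedOutput(); err != nil {
			log.Fatalf("cancel failed: %v\n%s", err, out)
		}
		time.Sleep(20 * time.Second)
		// The host should still report Running, since the scheduled stop was cancelled.
		out, _ := mk("status", "--format", "{{.Host}}").Output()
		fmt.Printf("host state after cancel: %s\n", out)
	}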

                                                
                                    
x
+
TestInsufficientStorage (8.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-058217 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-058217 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.33629268s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c11accf7-c488-43e7-981a-15158661b650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-058217] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5603d6b0-adcc-4052-a8b6-ed34a73d7033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22158"}}
	{"specversion":"1.0","id":"f58ce9f5-a650-4a52-991e-3c9028246217","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"22781815-6a07-43e9-be5e-37f76cd7a29c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig"}}
	{"specversion":"1.0","id":"ac5e6392-c376-42ff-8876-4bf44f66e65c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube"}}
	{"specversion":"1.0","id":"8501066a-6ff0-4945-bc4e-2838d4244648","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c37374e3-f373-4566-b242-e497380fe24c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fec3f295-9f07-49b8-ae52-f7ddbd85d8cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0ed08d5c-9a89-432e-bdff-3e9a25f2c024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a3d86b69-c12d-4ab8-9411-26fa72c9a990","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3f68b3f-67f5-4562-878e-f09aa69b1a4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"af68fbfe-4e78-4cbc-99a6-fa0930c22ec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-058217\" primary control-plane node in \"insufficient-storage-058217\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c36f1a08-755e-49d1-b825-6e353c093e03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765575274-22117 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9edba070-4494-4097-9fc4-a65b4c9a6900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a00969ef-0bd3-437b-9c16-7847cd346914","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-058217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-058217 --output=json --layout=cluster: exit status 7 (284.672563ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-058217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-058217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 02:58:24.548459  192606 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-058217" does not appear in /home/jenkins/minikube-integration/22158-5058/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-058217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-058217 --output=json --layout=cluster: exit status 7 (286.271192ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-058217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-058217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 02:58:24.835211  192719 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-058217" does not appear in /home/jenkins/minikube-integration/22158-5058/kubeconfig
	E1216 02:58:24.846308  192719 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/insufficient-storage-058217/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-058217" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-058217
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-058217: (1.888391644s)
--- PASS: TestInsufficientStorage (8.80s)
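
With `--output=json`, `minikube start` emits one CloudEvents-style JSON object per line, as captured above. The sketch below shows one way to scan that stream for the error event and surface its name, exit code, and message; the field names mirror the objects printed in this log, but the exact schema should be treated as an assumption rather than a stable API.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the JSON lines shown above from `minikube start --output=json`.
	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			Exitcode string `json:"exitcode"`
			Advice   string `json:"advice"`
		} `json:"data"`
	}

	func main() {
		// Pipe the start output into this program, e.g.:
		//   out/minikube-linux-amd64 start -p insufficient-storage-058217 --output=json ... | ./scan
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON noise
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.Exitcode, ev.Data.Message)
			}
		}
	}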

                                                
                                    
x
+
TestRunningBinaryUpgrade (292.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1807361238 start -p running-upgrade-146373 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1807361238 start -p running-upgrade-146373 --memory=3072 --vm-driver=docker  --container-runtime=crio: (19.313177249s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-146373 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 03:00:34.789455    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-146373 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.971751327s)
helpers_test.go:176: Cleaning up "running-upgrade-146373" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-146373
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-146373: (2.573142292s)
--- PASS: TestRunningBinaryUpgrade (292.48s)

                                                
                                    
x
+
TestKubernetesUpgrade (293.81s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.213038557s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-058433
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-058433: (1.929196467s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-058433 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-058433 status --format={{.Host}}: exit status 7 (80.707376ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 03:02:25.720506    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.820897197s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-058433 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (100.313214ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-058433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-058433
	    minikube start -p kubernetes-upgrade-058433 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0584332 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-058433 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 03:05:34.789202    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-986152/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-058433 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.792985543s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-058433" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-058433
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-058433: (2.792605705s)
--- PASS: TestKubernetesUpgrade (293.81s)
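
The downgrade attempt above is expected to fail fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and leave the cluster untouched. A small wrapper can treat that code as "refused" rather than as a generic failure; the sketch below does so under the same assumptions about binary path, profile name, and versions as this run.

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Ask for an older Kubernetes on the already-upgraded profile, as in the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "kubernetes-upgrade-058433",
			"--memory=3072",
			"--kubernetes-version=v1.28.0",
			"--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			log.Fatalf("downgrade unexpectedly succeeded:\n%s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 106:
			// 106 is the K8S_DOWNGRADE_UNSUPPORTED exit seen in this run; the cluster is left as-is.
			fmt.Println("downgrade refused as expected; existing cluster untouched")
		default:
			log.Fatalf("unexpected failure: %v\n%s", err, out)
		}
	}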

                                                
                                    
x
+
TestMissingContainerUpgrade (66.08s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1509137890 start -p missing-upgrade-423691 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1509137890 start -p missing-upgrade-423691 --memory=3072 --driver=docker  --container-runtime=crio: (20.571810013s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-423691
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-423691: (1.964472969s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-423691
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-423691 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-423691 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.542326839s)
helpers_test.go:176: Cleaning up "missing-upgrade-423691" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-423691
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-423691: (2.391231936s)
--- PASS: TestMissingContainerUpgrade (66.08s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
x
+
TestPause/serial/Start (56.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-837191 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-837191 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.426499282s)
--- PASS: TestPause/serial/Start (56.43s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (306.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1402105896 start -p stopped-upgrade-863865 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1402105896 start -p stopped-upgrade-863865 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.305904541s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1402105896 -p stopped-upgrade-863865 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1402105896 -p stopped-upgrade-863865 stop: (3.065516522s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-863865 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-863865 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.321454594s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (306.69s)
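This run exercises the stopped-binary upgrade path: provision with an older released binary, stop the cluster, then restart the same profile with the binary under test. A sketch of that sequence, with the old-binary path and profile name as placeholders:

	# provision with a previously released minikube binary (path is illustrative)
	/tmp/minikube-v1.35.0 start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.35.0 -p stopped-demo stop
	# restart the stopped profile with the newer binary built at this commit
	out/minikube-linux-amd64 start -p stopped-demo --memory=3072 --driver=docker --container-runtime=crio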

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.56s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-837191 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-837191 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.544013315s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027639 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-027639 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (79.293545ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-027639] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
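The MK_USAGE exit above reflects that --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the invalid and valid invocations, using an illustrative profile name (nok8s-demo):

	# rejected: --no-kubernetes combined with an explicit version exits with MK_USAGE
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	# accepted: start the node without Kubernetes components
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio
	# if a version is pinned in the global config, clear it first (as the error message suggests)
	minikube config unset kubernetes-version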

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (24.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027639 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1216 03:03:12.671942    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027639 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.100920741s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-027639 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (16.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027639 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027639 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.003891442s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-027639 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-027639 status -o json: exit status 2 (325.535464ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-027639","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-027639
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-027639: (2.142152177s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-863865
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-863865: (1.005640774s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027639 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027639 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.131805582s)
--- PASS: TestNoKubernetes/serial/Start (7.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-646016 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-646016 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (168.363662ms)

                                                
                                                
-- stdout --
	* [false-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:03:41.199496  259224 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:03:41.199802  259224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:03:41.199828  259224 out.go:374] Setting ErrFile to fd 2...
	I1216 03:03:41.199834  259224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:03:41.200189  259224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5058/.minikube/bin
	I1216 03:03:41.200782  259224 out.go:368] Setting JSON to false
	I1216 03:03:41.202204  259224 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2773,"bootTime":1765851448,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:03:41.202280  259224 start.go:143] virtualization: kvm guest
	I1216 03:03:41.204869  259224 out.go:179] * [false-646016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:03:41.206519  259224 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:03:41.206526  259224 notify.go:221] Checking for updates...
	I1216 03:03:41.207991  259224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:03:41.209540  259224 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5058/kubeconfig
	I1216 03:03:41.211043  259224 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5058/.minikube
	I1216 03:03:41.212366  259224 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:03:41.213631  259224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:03:41.215407  259224 config.go:182] Loaded profile config "NoKubernetes-027639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1216 03:03:41.215536  259224 config.go:182] Loaded profile config "kubernetes-upgrade-058433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:03:41.215656  259224 config.go:182] Loaded profile config "running-upgrade-146373": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 03:03:41.215777  259224 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:03:41.239545  259224 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 03:03:41.239665  259224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:03:41.296513  259224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 03:03:41.287008323 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 03:03:41.296617  259224 docker.go:319] overlay module found
	I1216 03:03:41.298536  259224 out.go:179] * Using the docker driver based on user configuration
	I1216 03:03:41.299911  259224 start.go:309] selected driver: docker
	I1216 03:03:41.299925  259224 start.go:927] validating driver "docker" against <nil>
	I1216 03:03:41.299936  259224 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:03:41.301865  259224 out.go:203] 
	W1216 03:03:41.303179  259224 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 03:03:41.304285  259224 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-646016 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-646016" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:01:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-058433
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:00:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-146373
contexts:
- context:
    cluster: kubernetes-upgrade-058433
    user: kubernetes-upgrade-058433
  name: kubernetes-upgrade-058433
- context:
    cluster: running-upgrade-146373
    user: running-upgrade-146373
  name: running-upgrade-146373
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-058433
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kubernetes-upgrade-058433/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kubernetes-upgrade-058433/client.key
- name: running-upgrade-146373
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-646016

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646016"

                                                
                                                
----------------------- debugLogs end: false-646016 [took: 3.430654839s] --------------------------------
helpers_test.go:176: Cleaning up "false-646016" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-646016
--- PASS: TestNetworkPlugins/group/false (3.78s)
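The rejection captured above ("The \"crio\" container runtime requires CNI") means --cni=false cannot be combined with --container-runtime=crio. A minimal sketch of the failing and working invocations, using an illustrative profile name (cni-demo):

	# rejected: crio needs a CNI, so disabling it exits with MK_USAGE
	minikube start -p cni-demo --memory=3072 --cni=false --driver=docker --container-runtime=crio
	# accepted: omit --cni and let minikube select a CNI suitable for crio
	minikube start -p cni-demo --memory=3072 --driver=docker --container-runtime=crio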

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22158-5058/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-027639 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-027639 "sudo systemctl is-active --quiet service kubelet": exit status 1 (302.406343ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-027639
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-027639: (1.306547945s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (9.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027639 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027639 --driver=docker  --container-runtime=crio: (9.091634041s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.953808269s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-027639 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-027639 "sudo systemctl is-active --quiet service kubelet": exit status 1 (299.608988ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (47.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (47.968349815s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-073001 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [68715bfa-1969-4519-9966-8409fc51c09f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [68715bfa-1969-4519-9966-8409fc51c09f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004550776s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-073001 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-307185 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c5a8e168-08bb-4b5c-ab8b-3f7814bcd923] Pending
helpers_test.go:353: "busybox" [c5a8e168-08bb-4b5c-ab8b-3f7814bcd923] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c5a8e168-08bb-4b5c-ab8b-3f7814bcd923] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003672863s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-307185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-073001 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-073001 --alsologtostderr -v=3: (16.076264786s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-307185 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-307185 --alsologtostderr -v=3: (16.806267185s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.81s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (43.284871989s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001: exit status 7 (77.710508ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-073001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
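The step above enables the dashboard addon against a stopped profile and overrides one of its images. A minimal sketch of that command on its own, with an illustrative profile name (demo-profile):

	# enable the dashboard addon on a (possibly stopped) profile, overriding the MetricsScraper image
	minikube addons enable dashboard -p demo-profile --images=MetricsScraper=registry.k8s.io/echoserver:1.4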

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-073001 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.477594367s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073001 -n old-k8s-version-073001
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.83s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185: exit status 7 (89.464733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-307185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (48.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 03:05:28.794746    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-307185 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (48.092904202s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-307185 -n no-preload-307185
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-079165 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [82e37b9d-9cbd-4f3b-bb01-1e9aa8b3db33] Pending
helpers_test.go:353: "busybox" [82e37b9d-9cbd-4f3b-bb01-1e9aa8b3db33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [82e37b9d-9cbd-4f3b-bb01-1e9aa8b3db33] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00389759s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-079165 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (24.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (24.302775433s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-079165 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-079165 --alsologtostderr -v=3: (16.301626438s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qgkcx" [0a9a2afa-30fa-49b2-83d7-e08d89a57451] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003544567s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qgkcx" [0a9a2afa-30fa-49b2-83d7-e08d89a57451] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003846375s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-073001 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-ddfzf" [9d435c79-dbc4-4bc8-b6ec-8f7fdb17ce4c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004025715s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-073001 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165: exit status 7 (101.310751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-079165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-079165 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (49.877752915s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079165 -n default-k8s-diff-port-079165
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.24s)

TestStartStop/group/newest-cni/serial/Stop (8.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-991316 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-991316 --alsologtostderr -v=3: (8.285570304s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.29s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-ddfzf" [9d435c79-dbc4-4bc8-b6ec-8f7fdb17ce4c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004128171s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-307185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-307185 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316: exit status 7 (109.117028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-991316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/newest-cni/serial/SecondStart (13.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-991316 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (12.626254552s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-991316 -n newest-cni-991316
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.08s)

TestStartStop/group/embed-certs/serial/FirstStart (44.49s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.493694978s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.49s)

TestNetworkPlugins/group/auto/Start (38.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (38.915602271s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.92s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-991316 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestNetworkPlugins/group/kindnet/Start (43.64s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.635320933s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.64s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-s5jhg" [ba51e595-a2f2-45d2-9beb-e9ae0b9247dd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003609567s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-742794 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [91384ee0-dd8e-4fb3-ad77-eb48d3412f6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [91384ee0-dd8e-4fb3-ad77-eb48d3412f6e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003413087s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-742794 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-646016 "pgrep -a kubelet"
I1216 03:07:01.454727    8586 config.go:182] Loaded profile config "auto-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-646016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fphn9" [fa45deba-a42c-4569-850d-aa319b5cb316] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fphn9" [fa45deba-a42c-4569-850d-aa319b5cb316] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004869035s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.20s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-s5jhg" [ba51e595-a2f2-45d2-9beb-e9ae0b9247dd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004567024s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-079165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079165 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Stop (16.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-742794 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-742794 --alsologtostderr -v=3: (16.238329373s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.24s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-646016 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/calico/Start (49.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.72504766s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.73s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-965fd" [fd986b13-1cf5-456b-8a34-898f17fbf9bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004094681s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794: exit status 7 (93.692998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-742794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (47.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1216 03:07:25.720209    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/addons-568105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-742794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (46.934402777s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742794 -n embed-certs-742794
I1216 03:08:12.602289    8586 config.go:182] Loaded profile config "calico-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.33s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-646016 "pgrep -a kubelet"
I1216 03:07:26.802135    8586 config.go:182] Loaded profile config "kindnet-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-646016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-chzd2" [7d57015f-67c3-4b41-887e-fa9b1b141885] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-chzd2" [7d57015f-67c3-4b41-887e-fa9b1b141885] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004975216s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

TestNetworkPlugins/group/custom-flannel/Start (51.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.164976578s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.17s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-646016 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (64.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.636990966s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.64s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-6fvlx" [47b47d83-36de-4c5b-8371-b1cab082975b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005006747s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-646016 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-646016 replace --force -f testdata/netcat-deployment.yaml
E1216 03:08:12.672114    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/functional-781918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rlsxz" [1d0ac62f-4f9b-4aba-9878-1b5d21a19d42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rlsxz" [1d0ac62f-4f9b-4aba-9878-1b5d21a19d42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004288958s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4srjf" [0e3fb1ad-a5ab-41e6-94be-9b09ed1209a6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004382607s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4srjf" [0e3fb1ad-a5ab-41e6-94be-9b09ed1209a6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003540972s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-742794 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-646016 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-646016 "pgrep -a kubelet"
I1216 03:08:21.739603    8586 config.go:182] Loaded profile config "custom-flannel-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-646016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pb642" [6c6272e9-f4dc-4eed-8607-a72437cf634d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-pb642" [6c6272e9-f4dc-4eed-8607-a72437cf634d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00404497s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-742794 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-646016 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (52.83s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.830656032s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.83s)

TestNetworkPlugins/group/bridge/Start (67.32s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-646016 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.3223497s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.32s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-646016 "pgrep -a kubelet"
I1216 03:09:03.744382    8586 config.go:182] Loaded profile config "enable-default-cni-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-646016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rgrfs" [9ddac740-3f15-4c45-babc-19cc054ff512] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rgrfs" [9ddac740-3f15-4c45-babc-19cc054ff512] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004200587s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-646016 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-6bjh2" [cf60b840-c313-41a9-8952-9e30c779e0f6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003333298s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-646016 "pgrep -a kubelet"
I1216 03:09:33.253719    8586 config.go:182] Loaded profile config "flannel-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (7.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-646016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5rjgs" [c3c408fa-c590-49a8-bfa3-24b1007f220f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5rjgs" [c3c408fa-c590-49a8-bfa3-24b1007f220f] Running
E1216 03:09:39.859523    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:09:39.865931    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:09:39.877329    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:09:39.898792    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:09:39.940279    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:09:40.021734    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:09:40.183295    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 7.003816791s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (7.17s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-646016 exec deployment/netcat -- nslookup kubernetes.default
E1216 03:09:40.504997    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/old-k8s-version-073001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-646016 "pgrep -a kubelet"
E1216 03:09:50.176445    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1216 03:09:50.433207    8586 config.go:182] Loaded profile config "bridge-646016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-646016 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9rm9n" [4c5b0ced-c6c8-4e0d-8f93-cf39cc7fe631] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9rm9n" [4c5b0ced-c6c8-4e0d-8f93-cf39cc7fe631] Running
E1216 03:09:55.297777    8586 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/no-preload-307185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003182599s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-646016 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-646016 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
372 TestStartStop/group/disable-driver-mounts 0.26
382 TestNetworkPlugins/group/kubenet 3.45
394 TestNetworkPlugins/group/cilium 3.83

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-899443" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-899443
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-646016 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-646016" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:01:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-058433
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:00:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-146373
contexts:
- context:
    cluster: kubernetes-upgrade-058433
    user: kubernetes-upgrade-058433
  name: kubernetes-upgrade-058433
- context:
    cluster: running-upgrade-146373
    user: running-upgrade-146373
  name: running-upgrade-146373
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-058433
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kubernetes-upgrade-058433/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kubernetes-upgrade-058433/client.key
- name: running-upgrade-146373
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-646016

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646016"

                                                
                                                
----------------------- debugLogs end: kubenet-646016 [took: 3.269857484s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-646016" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-646016
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)
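Every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the kubenet-646016 cluster is never created: the test skips at net_test.go:93 before minikube start runs (kubenet has no CNI and crio requires one), so the kubeconfig captured in the dump only contains the kubernetes-upgrade-058433 and running-upgrade-146373 contexts and current-context is empty. A quick way to confirm this from the same workspace, assuming the run's kubeconfig and minikube binary are still in place:

    # list profiles known to minikube; kubenet-646016 should be absent
    out/minikube-linux-amd64 profile list
    # list kubectl contexts; only the *-upgrade clusters appear and none is current
    kubectl config get-contexts

The cilium-646016 debugLogs that follow fail for the same reason: that test is also skipped before any cluster is started.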

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-646016 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-646016" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:01:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-058433
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5058/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:00:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-146373
contexts:
- context:
    cluster: kubernetes-upgrade-058433
    user: kubernetes-upgrade-058433
  name: kubernetes-upgrade-058433
- context:
    cluster: running-upgrade-146373
    user: running-upgrade-146373
  name: running-upgrade-146373
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-058433
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kubernetes-upgrade-058433/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/kubernetes-upgrade-058433/client.key
- name: running-upgrade-146373
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5058/.minikube/profiles/running-upgrade-146373/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-646016

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: cri-dockerd version:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: containerd daemon status:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: containerd daemon config:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: containerd config dump:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: crio daemon status:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: crio daemon config:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: /etc/crio:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

>>> host: crio config:
* Profile "cilium-646016" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646016"

----------------------- debugLogs end: cilium-646016 [took: 3.666160212s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-646016" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-646016
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)